Learning the parts of objects by
nonnegative matrix factorization
D.D. Lee (Bell Labs)
H.S. Seung (MIT)
Presenter: Zhipeng Zhao
Introduction
• NMF (Nonnegative Matrix Factorization):
Theory: Perception of the whole is based on
perception of its parts.
• Comparison with two other matrix factorization methods:
  PCA (Principal Components Analysis)
  VQ (Vector Quantization)
Comparison:
• Common features:
– Represent a face as a linear combination of basis images.
– Matrix factorization: V ≈ WH
  V: n × m matrix, each column of which contains the n nonnegative pixel values of one of the m facial images.
  W: n × r matrix; the r columns of W are called basis images.
  H: r × m matrix; each column of H is called an encoding.
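A quick NumPy sketch of these dimensions (the array names and sizes are illustrative, not taken from the paper):

  import numpy as np

  n, m, r = 361, 200, 49        # pixels per image, number of faces, number of basis images (illustrative)
  V = np.random.rand(n, m)      # n x m matrix: one nonnegative face image per column
  W = np.random.rand(n, r)      # n x r matrix: the r columns are basis images
  H = np.random.rand(r, m)      # r x m matrix: each column is an encoding
  print((W @ H).shape)          # (n, m): the product WH approximates V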
Comparison (cont’d)
• Representation:
  – NMF: parts-based
  – PCA: holistic
  – VQ: holistic
• Basis images:
  – NMF: localized features
  – PCA: eigenfaces
  – VQ: whole face
• Constraints on W and H:
  – NMF: allows multiple basis images to represent a face, but only additive combinations
  – PCA: each face is approximated by a linear combination of all the eigenfaces
  – VQ: each column of H is constrained to be a unary vector, so every face is approximated by a single basis image
Implementation of NMF
• Iterative algorithm: starting from random nonnegative W and H, repeatedly apply the multiplicative update rules
  W_ia ← W_ia Σ_μ [V_iμ / (WH)_iμ] H_aμ,   W_ia ← W_ia / Σ_j W_ja
  H_aμ ← H_aμ Σ_i W_ia [V_iμ / (WH)_iμ]
Implementation (cont’d)
• Objective function:
  F = Σ_{i=1..n} Σ_{μ=1..m} [ V_iμ log(WH)_iμ − (WH)_iμ ]
• The updates converge to a local maximum of this objective function, which is related to the likelihood of generating the images in V from the basis W and encodings H.
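A minimal NumPy sketch of this iterative algorithm (the fixed iteration count and the small eps added to avoid division by zero are my own choices, not from the paper):

  import numpy as np

  def nmf(V, r, n_iter=500, eps=1e-9):
      # Factor V ~ W @ H using the multiplicative update rules above
      n, m = V.shape
      rng = np.random.default_rng(0)
      W = rng.random((n, r))                       # random nonnegative initialization
      H = rng.random((r, m))
      for _ in range(n_iter):
          R = V / (W @ H + eps)                    # elementwise ratios V_iu / (WH)_iu
          W *= R @ H.T                             # W_ia <- W_ia * sum_u R_iu H_au
          W /= W.sum(axis=0, keepdims=True) + eps  # W_ia <- W_ia / sum_j W_ja
          R = V / (W @ H + eps)
          H *= W.T @ R                             # H_au <- H_au * sum_i W_ia R_iu
      return W, H

  V = np.random.rand(100, 40)                      # toy nonnegative data
  W, H = nmf(V, r=10)
  print(np.abs(V - W @ H).mean())                  # average reconstruction error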
Network model of NMF
• NMF can be interpreted as a network in which the visible variables V (pixel values) are generated from the hidden variables H (encodings) through connections of strength W.
Semantic analysis of text documents using NMF
• A corpus of documents is summarized by a matrix V, where V_iμ is the number of times the ith word in the vocabulary appears in the μth document.
• The NMF algorithm finds the approximate factorization V ≈ WH of this matrix into a feature set W and hidden variables H, in the same way as was done for faces.
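A minimal sketch of building such a word-count matrix V for a toy corpus (the documents and vocabulary here are hypothetical); the resulting matrix can then be factored exactly as for faces, e.g. with the nmf() sketch above:

  import numpy as np

  # Toy corpus (hypothetical); in the paper V summarizes a real corpus of documents.
  docs = [
      "the court heard the case today",
      "the team won the game",
      "the judge ruled on the case",
      "the game went into overtime",
  ]
  vocab = sorted({w for d in docs for w in d.split()})
  word_index = {w: i for i, w in enumerate(vocab)}

  # V[i, u] = number of times the ith vocabulary word appears in the uth document
  V = np.zeros((len(vocab), len(docs)))
  for u, d in enumerate(docs):
      for w in d.split():
          V[word_index[w], u] += 1

  print(V.shape)   # (vocabulary size, number of documents); factor as V ~ WH, e.g. nmf(V, r=2)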