Happiness Is Great

Thursday, July 18, 2013

[Supplementary paper reading] Natural and Seamless Image Composition With Color Control

Gradient-Domain Fusion

image gradient

GrabCut

Dirichlet boundary condition

Weight function

Weighted average

Random walk

Theorem: a graph is transient if and only if the resistance between a point and infinity is finite. It is not important which point is chosen if the graph is connected.
In other words, in a transient system, one only needs to overcome a finite resistance to get to infinity from any point. In a recurrent system, the resistance from any point to infinity is infinite.
This characterization of recurrence and transience is very useful, and specifically it allows us to analyze the case of a city drawn in the plane with the distances bounded.
A random walk on a graph is a very special case of a Markov chain. Unlike a general Markov chain, random walk on a graph enjoys a property called time symmetry or reversibility. Roughly speaking, this property, also called the principle of detailed balance, means that the probabilities to traverse a given path in one direction or in the other have a very simple connection between them (if the graph is regular, they are just equal). This property has important consequences.
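The detailed-balance property described above can be checked directly on a small example. This is a minimal sketch (the graph and numbers are my own illustration, not from the source), using the standard fact that a random walk on an undirected graph has stationary distribution proportional to vertex degree:

```python
import numpy as np

# Verify detailed balance (reversibility) for a simple random walk on a
# small undirected graph: pi(x) * P(x, y) == pi(y) * P(y, x) for all x, y,
# where pi(v) is proportional to the degree of v.

# Adjacency matrix of a 4-vertex undirected graph (a triangle plus a pendant).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]          # transition matrix: uniform over neighbors
pi = deg / deg.sum()          # stationary distribution, proportional to degree

# Detailed balance: the "probability flow" matrix must be symmetric.
flow = pi[:, None] * P
assert np.allclose(flow, flow.T)
print("detailed balance holds:", np.allclose(flow, flow.T))
```

Note that for a regular graph all degrees are equal, so pi is uniform and the forward and backward path probabilities are exactly equal, as the excerpt says.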
Starting in the 1980s, much research has gone into connecting properties of the graph to random walks. In addition to the electrical network connection described above, there are important connections to isoperimetric inequalities, functional inequalities such as Sobolev and Poincaré inequalities, and properties of solutions of Laplace's equation. A significant portion of this research was focused on Cayley graphs of finitely generated groups. For example, the proof of Dave Bayer and Persi Diaconis that 7 riffle shuffles are enough to mix a pack of cards (see more details under shuffle) is in effect a result about random walk on the group S_n, and the proof uses the group structure in an essential way. In many cases these discrete results carry over to, or are derived from, manifolds and Lie groups.
A good reference for random walk on graphs is the online book by Aldous and Fill. For groups see the book of Woess. If the transition kernel p(x,y) is itself random (based on an environment ω), then the random walk is called a "random walk in random environment". When the law of the random walk includes the randomness of ω, the law is called the annealed law; on the other hand, if ω is seen as fixed, the law is called a quenched law. See the book of Hughes or the lecture notes of Zeitouni.
We can think about choosing every possible edge with the same probability as maximizing uncertainty (entropy) locally. We could also do it globally: in the maximal entropy random walk (MERW) we want all paths to be equally probable, or in other words, for every two vertices, every path of a given length is equally probable. This random walk has much stronger localization properties.
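A small sketch of the MERW idea, assuming the standard spectral construction (my own toy graph, not from the source): with adjacency matrix A, largest eigenvalue λ and corresponding positive eigenvector ψ, MERW uses transitions P(i, j) = A[i, j] ψ[j] / (λ ψ[i]), so that every path of a given length between two fixed vertices has the same probability λ^(-k) ψ[end] / ψ[start]:

```python
import numpy as np

# Maximal entropy random walk (MERW) on a small undirected graph.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

eigvals, eigvecs = np.linalg.eigh(A)   # A is symmetric
lam = eigvals[-1]                      # largest (Perron) eigenvalue
psi = np.abs(eigvecs[:, -1])           # Perron eigenvector, made positive

# MERW transition matrix: P(i, j) = A[i, j] / lam * psi[j] / psi[i]
P = A / lam * psi[None, :] / psi[:, None]
assert np.allclose(P.sum(axis=1), 1.0)   # rows sum to one (psi is an eigenvector)

# Two different length-2 paths between the same endpoints are equally probable:
p_via_1 = P[0, 1] * P[1, 3]   # path 0 -> 1 -> 3
p_via_2 = P[0, 2] * P[2, 3]   # path 0 -> 2 -> 3
assert np.isclose(p_via_1, p_via_2)
```

Both path probabilities equal λ^(-2) ψ[3] / ψ[0], depending only on the endpoints and the length, which is exactly the "all paths equally probable" property.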


Images and frequency

The human eye is more sensitive to the low-frequency parts of an image: even a slight change to the data in the low frequencies is readily noticed. This is why we usually say that the important part of an image lies in the low frequencies.
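One way to make the low-frequency claim concrete is to measure where an image's spectral energy sits. This is a minimal sketch on a synthetic image (my own illustration and thresholds, not from the source):

```python
import numpy as np

# For smooth image content, the bulk of the spectral energy concentrates
# in a small low-frequency block of the 2D Fourier transform.
rng = np.random.default_rng(0)
n = 64
x = np.linspace(0, 1, n, endpoint=False)
smooth = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
img = smooth + 0.05 * rng.standard_normal((n, n))   # smooth signal + mild noise

F = np.fft.fftshift(np.fft.fft2(img))    # move zero frequency to the center
c = n // 2
low_block = F[c - 4:c + 4, c - 4:c + 4]  # 8x8 block of lowest frequencies

ratio = (np.abs(low_block) ** 2).sum() / (np.abs(F) ** 2).sum()
print(f"fraction of energy in the low-frequency block: {ratio:.3f}")
assert ratio > 0.9
```

Here 64 of 4096 coefficients carry over 90% of the energy, which is why tampering with those few coefficients visibly changes the picture.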

Gaussian and Laplacian Pyramids

Laplacian Pyramid Blending

Image pyramid

  • Laplacian pyramid: used to reconstruct the un-downsampled upper-level image from a lower-level image of the pyramid
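A minimal sketch of that reconstruction property, using a simple 2x2 box blur and nearest-neighbor upsampling instead of the usual Gaussian kernel (simplified assumptions of my own, not the construction from any particular paper):

```python
import numpy as np

def down(img):
    """Blur with a 2x2 box filter (periodic boundary), then drop every other sample."""
    blurred = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)
               + np.roll(np.roll(img, 1, 0), 1, 1)) / 4.0
    return blurred[::2, ::2]

def up(img):
    """Nearest-neighbor upsample to twice the size."""
    return np.kron(img, np.ones((2, 2)))

def laplacian_pyramid(img, levels):
    """Each Laplacian level stores the detail lost by one downsampling step."""
    pyr, g = [], img
    for _ in range(levels):
        smaller = down(g)
        pyr.append(g - up(smaller))   # residual (band-pass) level
        g = smaller
    pyr.append(g)                     # coarsest Gaussian level at the end
    return pyr

def reconstruct(pyr):
    """Upsample the coarse level and add residuals back, top to bottom."""
    g = pyr[-1]
    for lap in reversed(pyr[:-1]):
        g = up(g) + lap
    return g

rng = np.random.default_rng(0)
img = rng.random((32, 32))
pyr = laplacian_pyramid(img, 3)
assert np.allclose(reconstruct(pyr), img)   # reconstruction is exact
```

Because each level stores exactly the residual that downsampling discards, the reconstruction is lossless by construction, whatever blur filter is used.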
Boundary value problem

image pyramid

Laplace operator and Laplacian matrix

ch06_edge detection_ Laplace operator   ★ great!!!!

gradient field

  • The gradient (or gradient vector field)
  • The gradient of the function f(x, y) = −(cos²x + cos²y)² depicted as a vector field on the bottom plane
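For that particular function, the gradient can be written out and checked numerically. A small sketch (my own derivation: ∂f/∂x = 2(cos²x + cos²y)·sin 2x, and symmetrically for y):

```python
import numpy as np

# Analytic gradient of f(x, y) = -(cos^2 x + cos^2 y)^2, checked against
# finite differences from np.gradient on a grid.
x = np.linspace(-2, 2, 401)
y = np.linspace(-2, 2, 401)
X, Y = np.meshgrid(x, y, indexing="ij")   # axis 0 = x, axis 1 = y

f = -(np.cos(X) ** 2 + np.cos(Y) ** 2) ** 2

s = np.cos(X) ** 2 + np.cos(Y) ** 2
gx = 2 * s * np.sin(2 * X)    # d/dx cos^2 x = -sin 2x, chain rule flips the sign
gy = 2 * s * np.sin(2 * Y)

nx, ny = np.gradient(f, x, y)
# Compare on the interior, where central differences are accurate.
assert np.allclose(nx[1:-1, 1:-1], gx[1:-1, 1:-1], atol=1e-2)
assert np.allclose(ny[1:-1, 1:-1], gy[1:-1, 1:-1], atol=1e-2)
```

Plotting (gx, gy) as arrows over the plane gives exactly the vector-field picture the caption describes.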

Poisson Image Editing ppt  

Poisson equation

Surface Reconstruction

Poisson's equation is also used to reconstruct a smooth 2D surface (in the sense of curve fitting) based on a large number of points p_i (a point cloud), where each point also carries an estimate of the local surface normal n_i.[1]
This technique reconstructs the implicit function f whose value is zero at the points p_i and whose gradient at the points p_i equals the normal vectors n_i. The set of (p_i, n_i) is thus a sampling of a continuous vector field V. The implicit function f is found by integrating the vector field V. Since not every vector field is the gradient of a function, the problem may or may not have a solution: the necessary and sufficient condition for a smooth vector field V to be the gradient of a function f is that the curl of V must be identically zero. In case this condition is difficult to impose, it is still possible to perform a least-squares fit to minimize the difference between V and the gradient of f.
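A 1D sketch of the "integrate a vector field by least squares" step (my own toy example, not the Poisson surface reconstruction algorithm itself): recover f from samples of its derivative by solving D f = v in the least-squares sense, pinning f at one point since integration only determines f up to a constant. In 2D the same normal equations become a discrete Poisson equation with a Dirichlet condition.

```python
import numpy as np

n = 50
x = np.linspace(0, 1, n)
f_true = np.sin(2 * np.pi * x)
v = np.diff(f_true)                  # sampled "vector field" V (forward differences)

# Forward-difference operator D: (D f)[i] = f[i+1] - f[i]
D = np.zeros((n - 1, n))
idx = np.arange(n - 1)
D[idx, idx] = -1.0
D[idx, idx + 1] = 1.0

# Pin f[0] = f_true[0] (Dirichlet condition) and solve for the rest by least squares.
rhs = v - D[:, 0] * f_true[0]
f_rest, *_ = np.linalg.lstsq(D[:, 1:], rhs, rcond=None)
f_rec = np.concatenate([[f_true[0]], f_rest])
assert np.allclose(f_rec, f_true, atol=1e-6)
```

In 1D every field is integrable, so the fit is exact; the least-squares formulation matters in 2D and 3D, where V may have nonzero curl and only the closest gradient is recoverable.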

The concepts of linear interpolation and weighted averages

From the perspective of probability theory, a weighted average is an expected value.
From the perspective of statistics, a weighted average is linear interpolation.
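Both viewpoints can be shown in a couple of lines (a minimal sketch with made-up numbers):

```python
# A weighted average with weights (1 - t, t) is exactly linear interpolation,
# and also the expected value of a random variable taking the value a with
# probability 1 - t and b with probability t.

def lerp(a, b, t):
    """Linear interpolation = weighted average with weights (1 - t, t)."""
    return (1 - t) * a + t * b

a, b, t = 10.0, 20.0, 0.25
weighted_average = (1 - t) * a + t * b       # weights sum to 1
expected_value = a * (1 - t) + b * t         # E[X] with P(X=a)=1-t, P(X=b)=t
assert lerp(a, b, t) == weighted_average == expected_value == 12.5
```

The weights play the role of probabilities (expectation view) or of the interpolation parameter (statistics view); the arithmetic is identical.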




