Abstract: During the last ten years, significant progress has been made in the development and analysis of solutions to nonlinear inverse problems. This rapid expansion has been driven by the demands of applications arising in the natural sciences, engineering, imaging, and finance. Such inverse problems can be written as operator equations F(x) = y with a nonlinear forward operator F mapping between Hilbert or Banach spaces X and Y. In this context, x ∈ X is the non-observable element to be determined from noisy observations of the element y ∈ Y, which can be interpreted as an effect caused by x. Unfortunately, it is an intrinsic property of F to be ‘smoothing’, which means that during the transition from x to y information gets lost, as is typical for procedures involving integration. Consequently, the retrieval of x from y tends to be unstable, as is typical for procedures involving differentiation. This is the phenomenon of ill-posedness. In the first lecture we outline different concepts of ill-posedness applying to inverse problems. As a consequence of ill-posedness, the stable approximate solution of inverse problems requires regularization, based on the substitution of the ill-posed original problem by a well-posed auxiliary problem. For a wide range of regularization methods, an error analysis has been developed. In particular, deriving convergence rates for regularization methods requires some kind of smoothness of the solution. From this perspective, smoothness is a welcome property in the solution process of inverse problems. This lecture series presents, in addition to theoretical ingredients, examples of the treatment of nonlinear applied inverse problems from technology, laser optics, and the financial markets. In recent years, extensions of regularization theory from a Hilbert space to a Banach space setting have also been established for linear inverse problems.
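The instability described above can be illustrated numerically. The following sketch (my own toy example, not taken from the lectures) discretizes the classic smoothing operator of integration on [0, 1]: applying F loses high-frequency information, and the naive inversion, which amounts to numerical differentiation, amplifies even small data noise into a large solution error.

```python
import numpy as np

# Hypothetical illustration: the forward operator F is discretized
# integration on [0, 1], a prototypical smoothing operation.
n = 100
h = 1.0 / n
F = h * np.tril(np.ones((n, n)))     # (F x)_i approximates the integral of x up to t_i

t = np.linspace(h, 1.0, n)
x_true = np.sin(2 * np.pi * t)       # non-observable element x
y = F @ x_true                       # exact effect y = F(x)

rng = np.random.default_rng(0)
delta = 1e-3
y_noisy = y + delta * rng.standard_normal(n)   # noisy observation of y

# Naive inversion amounts to differentiation and amplifies the noise:
# the relative data error is tiny, but the relative solution error is
# much larger -- the hallmark of ill-posedness.
x_naive = np.linalg.solve(F, y_noisy)
data_err = np.linalg.norm(y_noisy - y) / np.linalg.norm(y)
sol_err = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
print(data_err)   # small
print(sol_err)    # much larger than data_err
```

The amplification factor grows without bound as the discretization is refined, which is why regularization, rather than a finer grid, is needed for a stable reconstruction.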
We include this kind of theoretical progress in our discussions. One important approach to the treatment of inverse problems is Tikhonov regularization, which yields a one-parameter family of regularized solutions. In the second lecture we present the novel theory of variational source conditions for obtaining convergence rates in Tikhonov regularization for Hilbert spaces, reflexive Banach spaces, and also some recent results on regularization in the non-reflexive Banach space ℓ¹. Moreover, it is crucial to choose the regularization parameter appropriately. In this lecture, a sequential variant of the discrepancy principle is analyzed. In many cases such a parameter choice exhibits the so-called regularization property: the chosen parameter tends to zero as the noise level tends to zero, but more slowly than the noise level. It will be shown that this regularization property holds under two natural assumptions: first, exact penalization must be excluded (exact penalization veto), and secondly, the discrepancy principle must stop after a finite number of iterations. For inverse problems with monotone forward operators, the simpler Lavrentiev regularization can be exploited for the stable approximate solution of the corresponding linear and nonlinear operator equations. In the third lecture we present the theoretical background of this specific regularization method and recent research results in combination with concepts of conditional stability. Research presented in the lecture series is partially supported by Deutsche Forschungsgemeinschaft (DFG) under Grants HO 1454/8, 1454/9-1 and 1454/10-1.
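A minimal sketch of Tikhonov regularization combined with a sequential discrepancy principle may make the parameter-choice idea concrete. All names, the geometric decrease factor q, and the safety factor tau below are my own illustrative choices (the lectures treat the general nonlinear and Banach-space theory, not this particular linear toy problem): the parameter α is decreased geometrically until the residual falls below τδ, where δ is the noise level.

```python
import numpy as np

def tikhonov(F, y, alpha):
    """Minimizer of ||F x - y||^2 + alpha * ||x||^2 via the normal equations."""
    n = F.shape[1]
    return np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y)

def sequential_discrepancy(F, y_noisy, delta, tau=1.2, q=0.5,
                           alpha0=1.0, max_iter=60):
    """Decrease alpha geometrically until ||F x_alpha - y_noisy|| <= tau * delta."""
    alpha = alpha0
    for _ in range(max_iter):
        x = tikhonov(F, y_noisy, alpha)
        if np.linalg.norm(F @ x - y_noisy) <= tau * delta:
            return x, alpha
        alpha *= q                    # geometric decrease of the parameter
    return x, alpha                   # fallback: discrepancy level not reached

# Toy smoothing operator: discretized integration on [0, 1].
n = 100
h = 1.0 / n
F = h * np.tril(np.ones((n, n)))
t = np.linspace(h, 1.0, n)
x_true = np.sin(2 * np.pi * t)

rng = np.random.default_rng(1)
e = rng.standard_normal(n)
e /= np.linalg.norm(e)                # unit-norm noise direction
delta = 1e-3                          # known noise level
y_noisy = F @ x_true + delta * e      # ||y_noisy - F x_true|| = delta exactly

x_reg, alpha = sequential_discrepancy(F, y_noisy, delta)
print(alpha)                                  # chosen regularization parameter
print(np.linalg.norm(x_reg - x_true)
      / np.linalg.norm(x_true))               # small relative error
```

In line with the regularization property discussed above, rerunning this sketch with smaller δ selects smaller α, but α stays well above δ itself, so the noise-amplification term remains controlled.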