A twofold generalization of the classical continuous Hopfield neural network for modelling constrained optimization problems is proposed. On the one hand, non-quadratic cost functions are admitted, corresponding to non-linear output summation functions in the neurons. On the other hand, it is shown under which conditions various (new) types of constraints can be incorporated directly. The stability properties of several relaxation schemes are analyzed. If a direct incorporation of the constraints turns out to be impossible, the Hopfield-Lagrange model can be applied, whose stability properties are analyzed as well. Another way to deal with constraints is by means of dynamic penalty terms, using mean field annealing in order to arrive at a feasible solution. A famous example in this context is the elastic net, although, contrary to what is suggested in the literature, it seems impossible to derive the architecture of this network from a constrained Hopfield model. Furthermore, a non-equidistant elastic net is proposed and its stability properties are compared to those of the classical elastic net. In addition to simulation results known from the literature, most theoretical statements of this paper are validated with simulations of toy problems, while in some cases more sophisticated combinatorial optimization problems have been tried as well. In the final section, we discuss the applicability of the various models to constrained optimization. It is also demonstrated how the new ideas, inspired by the analysis of generalized continuous Hopfield models, can be transferred to discrete stochastic Hopfield models. By doing so, simulated annealing can be exploited in order to improve the quality of solutions. The transfer also opens new avenues for continued theoretical research.
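To make the first generalization concrete, the following is a minimal sketch in standard notation; the symbols $w_{ij}$, $I_i$, $g_i$, and $F$ are illustrative choices, not necessarily those used in the paper. The classical continuous Hopfield network relaxes towards minima of the energy

\[
E(V) \;=\; -\tfrac{1}{2}\sum_{i,j} w_{ij} V_i V_j \;-\; \sum_i I_i V_i \;+\; \sum_i \int_0^{V_i} g_i^{-1}(v)\,dv ,
\]

where $V_i = g_i(U_i)$ is the output of neuron $i$ with sigmoid transfer function $g_i$. Under the generalization described above, the quadratic part is replaced by a more general differentiable cost function $F(V)$,

\[
E(V) \;=\; F(V) \;+\; \sum_i \int_0^{V_i} g_i^{-1}(v)\,dv ,
\qquad
\frac{dU_i}{dt} \;=\; -\frac{\partial F}{\partial V_i} \;-\; U_i ,
\]

so that non-quadratic cost terms correspond to non-linear summation behaviour in the neurons; the classical model is recovered when $F$ is the quadratic form above.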