Linear regression models describe a linear relationship between a response and one or more predictive terms. Many times, however, a nonlinear relationship exists. Generalized linear models have these characteristics:

- At each set of values for the predictors, the response has a distribution that can be normal, binomial, Poisson, gamma, or inverse Gaussian, with parameters including a mean μ.
- A coefficient vector b defines a linear combination Xb of the predictors X.
- A link function f defines the model as f(μ) = Xb.

Logistic regression is a particular case of a generalized linear model in which the logit function is taken as the link function and the response variable has a binomial probability distribution. Poisson regression is a particular case of the generalized linear model in which a logarithmic function is taken as the link function and the response variable has a Poisson probability distribution (a short fitting sketch of both appears at the end of this section).

Decision trees, or classification trees and regression trees, predict responses to data. To predict a response, follow the decisions in the tree from the root (beginning) node down to a leaf node. The leaf node contains the response. Classification trees give responses that are nominal, such as 'true' or 'false'. Regression trees give numeric responses. Statistics and Machine Learning Toolbox trees are binary. Each step in a prediction involves checking the value of one predictor (variable).

Discriminant analysis is a classification method. It assumes that different classes generate data based on different Gaussian distributions. To train (create) a classifier, the fitting function estimates the parameters of a Gaussian distribution for each class (see "Creating Discriminant Analysis Model"). To predict the classes of new data, the trained classifier finds the class with the smallest misclassification cost (see "Prediction Using Discriminant Analysis Models"). Linear discriminant analysis is also known as the Fisher discriminant, named for its inventor.
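As a rough illustration of the generalized linear models described above, the following MATLAB sketch fits a logistic and a Poisson regression with the toolbox function fitglm. The predictor matrix X, the two responses, and the variable names are made up purely for illustration.

    % Hypothetical data: 100 observations of 2 predictors.
    X = rand(100, 2);
    yBinary = double(X(:,1) + X(:,2) + 0.1*randn(100,1) > 1);  % 0/1 response
    yCounts = poissrnd(exp(0.5 + X(:,1)));                     % count response

    % Logistic regression: binomial response distribution with the default
    % logit link, so the model is logit(mu) = Xb.
    mdlLogit = fitglm(X, yBinary, 'Distribution', 'binomial');

    % Poisson regression: Poisson response distribution with the default
    % log link, so the model is log(mu) = Xb.
    mdlPoisson = fitglm(X, yCounts, 'Distribution', 'poisson');

    % Predicted means mu come from applying the inverse link to Xb.
    muNew = predict(mdlLogit, [0.4 0.7]);

The only difference between the two fits is the assumed response distribution and its link function, which is exactly the role the link plays in f(μ) = Xb.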
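A similar sketch for the classification and regression trees described above, assuming the toolbox functions fitctree and fitrtree. The fisheriris sample data set ships with the toolbox; the regression data here are synthetic and purely illustrative.

    % Classification tree: nominal responses (species labels).
    load fisheriris                  % meas (predictors), species (labels)
    ctree = fitctree(meas, species);
    label = predict(ctree, [5.0 3.5 1.4 0.2]);  % follows root-to-leaf decisions

    % Regression tree: numeric responses.
    x = (1:100)';
    y = sin(x/10) + 0.1*randn(100,1);
    rtree = fitrtree(x, y);
    yhat = predict(rtree, 55);

    % view(ctree) displays the binary splits; each split tests one predictor.

Because the toolbox trees are binary, every internal node compares a single predictor against a threshold, and prediction simply walks these comparisons from the root until it reaches a leaf.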
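Finally, a minimal sketch of the discriminant analysis workflow, assuming the toolbox function fitcdiscr and again using the fisheriris sample data; the new observation is made up for illustration.

    load fisheriris                  % meas (predictors), species (labels)

    % Fitting estimates a Gaussian distribution for each class; the default
    % 'linear' discriminant type pools one covariance matrix across classes
    % (Fisher's linear discriminant).
    lda = fitcdiscr(meas, species);

    % Prediction assigns a new observation to the class with the smallest
    % expected misclassification cost.
    newObs = [6.1 2.8 4.7 1.2];
    predictedClass = predict(lda, newObs);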