Facial analysis is one of the most prominent topics in the field of computer vision today. In particular, the detection and localization of human faces in images, together with their biometric analysis, have attracted great interest because of the many industrial applications that rely on them. This doctoral dissertation carries out an independent study of the problems derived from two topics: face detection with eye location, and face recognition using a local texture feature-based approach. The algorithms developed focus on extracting the identity from a face image (in frontal or semi-frontal views) in semi-controlled scenarios. The goal was to develop robust algorithms readily applicable to real applications, such as advanced banking security and the definition of marketing strategies based on client statistics.

Regarding the extraction of local textures, an in-depth study is performed on some of the most widely used features, with special attention to Histograms of Oriented Gradients (HOG descriptors). Working on normalized face representations, these descriptors offer discriminative information about key facial landmarks (such as the eyes or the mouth) while remaining robust to illumination variations and small displacements. Several classification algorithms have been considered for face detection and recognition, all following a supervised learning strategy. Specifically, boosting and Support Vector Machine (SVM) classifiers have been used to classify local textures extracted around the eyes (i.e., HOG descriptors) for eye location. For face recognition, a novel feature-based algorithm has been developed: HOG-EBGM (HOG on Elastic Bunch Graph Matching). Given a face, the main steps of HOG-EBGM can be summarized as follows: first, a facial graph is built automatically by locating a set of facial keypoints; a local HOG descriptor is then extracted at each of these points and all of them are concatenated; the final biometric face vector is obtained by applying dimensionality reduction techniques; finally, the samples are matched against a database using a Nearest Neighbor approach.

On the FRGC database, the eyes were localized with a precision of 92.3% at an error below 5% of the inter-ocular distance, surpassing the results reported by several reference authors. Regarding face recognition, experiments on the FERET database showed that our use of HOG local descriptors provides more biometric information than other classical descriptors, such as Gabor coefficients, improving the recognition rate by up to 40% in some cases. Using the HOG-EBGM algorithm for the localization of facial landmarks produced results similar to those of other widely used algorithms, such as Active Appearance Models (AAM). Finally, the experiments have shown that including color cues in the HOG features provides additional information useful for face recognition, improving the recognition rates on FRGC by up to 11% compared to descriptors computed on gray-scale images. To evaluate the fully automatic HOG-EBGM for face recognition, we also participated in the international MOBIO contest. MOBIO provided a database of video samples in realistic scenarios (recorded with a mobile device) and offered an excellent context in which to compare our solutions with those of the other participants. In the evaluation of the MOBIO results, HOG-EBGM ranked as the fourth best solution among all participants.
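As a rough illustration of the eye-location stage summarized above, the following Python sketch classifies HOG descriptors of local patches with a linear SVM and scans a normalized face for the highest-scoring window. All names, patch sizes, and parameters (patch_hog, train_eye_classifier, locate_eye, 32-pixel patches) are illustrative assumptions, not the implementation developed in the dissertation.

```python
# Illustrative sketch only: HOG descriptors of eye / non-eye patches classified
# with a linear SVM, then a sliding-window search over the face.  Patch size,
# HOG parameters and function names are assumptions, not the thesis code.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

PATCH = 32  # assumed side, in pixels, of the square eye patch


def patch_hog(patch):
    """HOG descriptor of a single gray-scale patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))


def train_eye_classifier(eye_patches, non_eye_patches):
    """Fit a linear SVM on HOG descriptors of labeled eye / non-eye patches."""
    X = np.array([patch_hog(p) for p in list(eye_patches) + list(non_eye_patches)])
    y = np.array([1] * len(eye_patches) + [0] * len(non_eye_patches))
    return LinearSVC(C=1.0).fit(X, y)


def locate_eye(face, clf, stride=4):
    """Return the top-left corner of the window whose HOG descriptor
    receives the highest SVM score on a normalized gray-scale face."""
    best, best_score = None, -np.inf
    rows, cols = face.shape
    for r in range(0, rows - PATCH + 1, stride):
        for c in range(0, cols - PATCH + 1, stride):
            d = patch_hog(face[r:r + PATCH, c:c + PATCH])
            score = clf.decision_function(d[None, :])[0]
            if score > best_score:
                best, best_score = (r, c), score
    return best
```

A practical system would typically restrict the search to candidate eye regions of a detected face and combine left- and right-eye detections; those details are omitted here.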
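The HOG-EBGM pipeline can likewise be sketched under the assumption that the facial keypoints are already available: HOG descriptors extracted around each landmark are concatenated, reduced with PCA (one possible dimensionality reduction choice), and matched against a gallery with a nearest-neighbor search. The names below (face_descriptor, HogEbgmMatcher) are hypothetical.

```python
# Illustrative sketch only: the HOG-EBGM biometric vector built from landmark
# patches, reduced with PCA and matched with a nearest-neighbor search.
# Landmark detection (the graph-fitting step) is assumed to be given.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

PATCH = 32  # assumed side of the square patch centered on each landmark


def face_descriptor(face, landmarks):
    """Concatenate the HOG descriptors of the patches centered on each
    facial keypoint (landmarks assumed far enough from the image border)."""
    half = PATCH // 2
    parts = [hog(face[r - half:r + half, c - half:c + half],
                 orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
             for r, c in landmarks]
    return np.concatenate(parts)


class HogEbgmMatcher:
    """Gallery of PCA-reduced concatenated HOG descriptors, queried with a
    1-Nearest-Neighbor search."""

    def __init__(self, n_components=100):
        self.n_components = n_components

    def fit(self, gallery_faces, gallery_landmarks, labels):
        X = np.array([face_descriptor(f, l)
                      for f, l in zip(gallery_faces, gallery_landmarks)])
        # PCA cannot keep more components than samples or features.
        self.pca = PCA(n_components=min(self.n_components, *X.shape))
        self.nn = NearestNeighbors(n_neighbors=1).fit(self.pca.fit_transform(X))
        self.labels = np.asarray(labels)
        return self

    def identify(self, face, landmarks):
        """Return the gallery label of the closest biometric vector."""
        x = self.pca.transform(face_descriptor(face, landmarks)[None, :])
        _, idx = self.nn.kneighbors(x)
        return self.labels[idx[0, 0]]
```

The final Nearest Neighbor matching over reduced biometric vectors mirrors the last step described in the abstract; other distance metrics or dimensionality reduction techniques could be substituted in the same structure.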