API Reference¶
This reference provides detailed documentation for all of the features in FEAT.
feat.detector module¶
class feat.detector.Detector(face_model='retinaface', landmark_model='mobilenet', au_model='rf', emotion_model='resmasknet', n_jobs=1)¶
Bases: object
detect_aus(frame, landmarks)¶ Detect Action Units from image or video frame
- Args:
frame (array): image loaded in array format (n, m, 3)
landmarks (array): 68 landmarks used to localize face.
- Returns:
array: Action Unit predictions
- Examples:
>>> import cv2
>>> from feat import Detector
>>> frame = cv2.imread(imgfile)
>>> detector = Detector()
>>> detected_faces = detector.detect_faces(frame)
>>> detected_landmarks = detector.detect_landmarks(frame, detected_faces)
>>> detector.detect_aus(frame, detected_landmarks)
detect_emotions(frame, facebox, landmarks)¶ Detect emotions from image or video frame
- Args:
frame (array): image loaded in array format (n, m, 3)
facebox (array): face detection results (x, y, x2, y2)
landmarks (array): 68 landmarks used to localize face
- Returns:
array: Emotion predictions
- Examples:
>>> import cv2
>>> frame = cv2.imread(imgfile)
>>> from feat import Detector
>>> detector = Detector()
>>> detected_faces = detector.detect_faces(frame)
>>> detected_landmarks = detector.detect_landmarks(frame, detected_faces)
>>> detector.detect_emotions(frame, detected_faces, detected_landmarks)
detect_faces(frame)¶ Detect faces from image or video frame
- Args:
frame (array): image array
- Returns:
array: face detection results (x, y, x2, y2)
- Examples:
>>> import cv2
>>> frame = cv2.imread(imgfile)
>>> from feat import Detector
>>> detector = Detector()
>>> detector.detect_faces(frame)
detect_image(inputFname, outputFname=None, verbose=False)¶ Detects FEX from an image file.
- Args:
inputFname (str, or list of str): Path to image file or a list of paths to image files.
outputFname (str, optional): Path to output file. Defaults to None.
- Returns:
Fex: Prediction results dataframe if outputFname is None. Returns True if outputFname is specified.
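- Example:
A minimal usage sketch (the filename "photo.jpg" is hypothetical; the default models are used):
>>> from feat import Detector
>>> detector = Detector()
>>> fex = detector.detect_image("photo.jpg")
>>> fex.aus()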
detect_landmarks(frame, detected_faces)¶ Detect landmarks from image or video frame
- Args:
frame (array): image array
detected_faces (array): face detection results (x, y, x2, y2)
- Returns:
list: x and y landmark coordinates (1,68,2)
- Examples:
>>> import cv2
>>> frame = cv2.imread(imgfile)
>>> from feat import Detector
>>> detector = Detector()
>>> detected_faces = detector.detect_faces(frame)
>>> detector.detect_landmarks(frame, detected_faces)
detect_video(inputFname, outputFname=None, skip_frames=1, verbose=False)¶ Detects FEX from a video file.
- Args:
inputFname (str): Path to video file
outputFname (str, optional): Path to output file. Defaults to None.
skip_frames (int, optional): Number of frames to skip between processed frames, for speed or when not all frames need to be processed. Defaults to 1.
- Returns:
dataframe: Prediction results dataframe if outputFname is None. Returns True if outputFname is specified.
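- Example:
A minimal usage sketch (the filename "clip.mp4" is hypothetical). For a 30 fps video, skip_frames=30 processes roughly one frame per second:
>>> from feat import Detector
>>> detector = Detector()
>>> results = detector.detect_video("clip.mp4", skip_frames=30)
>>> results.head()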
extract_face(frame, detected_faces, landmarks, size_output=112)¶ Extract a face in a frame with a convex hull of landmarks.
This function extracts the faces from the frame using convex hulls of the landmarks and masks out the rest.
- Args:
frame (array): the original image
detected_faces (list): face bounding box
landmarks (list): the landmark information
size_output (int, optional): size of the extracted face image. Defaults to 112.
- Returns:
resized_face_np: resized face as a numpy array
new_landmarks: landmarks of aligned face
extract_hog(frame, orientation=8, pixels_per_cell=(8, 8), cells_per_block=(2, 2), visualize=False)¶ Extract HOG features from a frame.
- Args:
frame (array): Frame of image
orientation (int, optional): Orientation for HOG. Defaults to 8.
pixels_per_cell (tuple, optional): Pixels per cell for HOG. Defaults to (8, 8).
cells_per_block (tuple, optional): Cells per block for HOG. Defaults to (2, 2).
visualize (bool, optional): Whether to provide the HOG image. Defaults to False.
- Returns:
hog_output: array of HOG features, and the HOG image if visualize is True.
process_frame(frame, counter=0)¶ Helper function to run face detection, landmark detection, and emotion detection on a frame.
- Args:
frame (np.array): Numpy array of image, ideally loaded through Pillow.Image
counter (int, str, default=0): Index used for the prediction results dataframe.
- Returns:
df (dataframe): Prediction results dataframe.
- Example:
>>> import numpy as np
>>> from PIL import Image
>>> from feat import Detector
>>> frame = Image.open("input.jpg")
>>> detector = Detector()
>>> detector.process_frame(np.array(frame))
feat.data module¶
class feat.data.Fex(*args, **kwargs)¶
Bases: pandas.core.frame.DataFrame
Fex is a class to represent facial expression (Fex) data.
The Fex class is an enhanced pandas dataframe, with extra attributes and methods to help with facial expression data analysis.
- Args:
filename: (str, optional) path to file
detector: (str, optional) name of software used to extract Fex (Feat, FACET, OpenFace, or Affectiva)
sampling_freq (float, optional): sampling rate of each row in Hz; defaults to None
features (pd.Dataframe, optional): features that correspond to each Fex row
sessions: Unique values indicating rows associated with a specific session (e.g., trial, subject, etc.). Must be a 1D array of n_samples elements; defaults to None
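- Example:
A minimal sketch of loading detection results into a Fex instance (the filename is hypothetical; the detector name should match the software that produced the file):
>>> from feat.data import Fex
>>> fex = Fex(filename="openface_results.csv", detector="OpenFace", sampling_freq=30)
>>> fex = fex.read_file()
>>> fex.info()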
append(data, session_id=None, axis=0)¶ Append a new Fex object to an existing object
- Args:
data: (Fex) Fex instance to append
session_id: session label
axis: ([0,1]) Axis to append. Rows=0, Cols=1
- Returns:
Fex instance
aus()¶ Returns the Action Units data
Returns Action Unit data using the columns set in fex.au_columns.
- Returns:
DataFrame: Action Units data
baseline(baseline='median', normalize=None, ignore_sessions=False)¶ Reference a Fex object to a baseline.
- Args:
baseline: {'median', 'mean', 'begin', or FexSeries instance}. Will subtract baseline from Fex object (e.g., mean, median). If passing a Fex object, it will treat that as the baseline.
normalize: (str) Can normalize results of baseline. Values can be [None, 'db', 'pct']; default None.
ignore_sessions: (bool) If True, will ignore Fex.sessions information. Otherwise, method will be applied separately to each unique session.
- Returns:
Fex object
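- Example:
A brief sketch of referencing each column to its median (fex here is an existing Fex instance, e.g. returned by Detector.detect_image):
>>> baselined = fex.baseline(baseline='median')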
calc_pspi()¶
clean(detrend=True, standardize=True, confounds=None, low_pass=None, high_pass=None, ensure_finite=False, ignore_sessions=False, *args, **kwargs)¶ Clean Time Series signal
This function wraps nilearn functionality and can filter, denoise, detrend, etc.
See http://nilearn.github.io/modules/generated/nilearn.signal.clean.html
This function can do several things on the input signals, in the following order:
detrend
standardize
remove confounds
low- and high-pass filter
If Fex.sessions is not None, sessions will be cleaned separately.
- Args:
confounds: (numpy.ndarray, str or list of Confounds timeseries) Shape must be (instant number, confound number), or just (instant number,). The number of time instants in signals and confounds must be identical (i.e. signals.shape[0] == confounds.shape[0]). If a string is provided, it is assumed to be the name of a csv file containing signals as columns, with an optional one-line header. If a list is provided, all confounds are removed from the input signal, as if all were in the same array.
low_pass: (float) low pass cutoff frequencies in Hz.
high_pass: (float) high pass cutoff frequencies in Hz.
detrend: (bool) If detrending should be applied on timeseries (before confound removal)
standardize: (bool) If True, returned signals are set to unit variance.
ensure_finite: (bool) If True, the non-finite values (NANs and infs) found in the data will be replaced by zeros.
ignore_sessions: (bool) If True, will ignore Fex.sessions information. Otherwise, method will be applied separately to each unique session.
- Returns:
cleaned Fex instance
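- Example:
A brief sketch of detrending and standardizing each column (fex here is an existing Fex instance; filtering additionally requires a sampling rate):
>>> cleaned = fex.clean(detrend=True, standardize=True)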
decompose(algorithm='pca', axis=1, n_components=None, *args, **kwargs)¶ Decompose Fex instance
- Args:
algorithm: (str) Algorithm to perform decomposition types=['pca', 'ica', 'nnmf', 'fa']
axis: dimension to decompose [0,1]
n_components: (int) number of components. If None then retain as many as possible.
- Returns:
output: a dictionary of decomposition parameters
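- Example:
A brief sketch of a three-component PCA over columns (fex here is an existing Fex instance):
>>> output = fex.decompose(algorithm='pca', axis=1, n_components=3)
>>> output.keys()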
design()¶ Returns the design data
Returns the study design information using columns in fex.design_columns.
- Returns:
DataFrame: design data
distance(method='euclidean', **kwargs)¶ Calculate distance between rows within a Fex() instance.
- Args:
method: type of distance metric (can use any scikit-learn or scipy metric)
- Returns:
dist: Outputs a 2D distance matrix.
downsample(target, **kwargs)¶ Downsample Fex columns. Relies on nltools.stats.downsample, but ensures that returned object is a Fex object.
- Args:
target (float): downsampling target, typically in samples not seconds
kwargs: additional inputs to nltools.stats.downsample
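- Example:
A brief sketch of downsampling to a target of 10 (fex here is an existing Fex instance with sampling_freq set; per the note above, the target is typically in samples):
>>> downsampled = fex.downsample(target=10)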
emotions()¶ Returns the emotion data
Returns emotions data using the columns set in fex.emotion_columns.
- Returns:
DataFrame: emotion data
extract_boft(min_freq=0.06, max_freq=0.66, bank=8, *args, **kwargs)¶ Extract Bag of Temporal features
- Args:
min_freq: minimum frequency of temporal filters
max_freq: maximum frequency of temporal filters
bank: number of temporal filter banks; filters are on an exponential scale
- Returns:
wavs: list of Morlet wavelets with corresponding freq
hzs: list of hzs for each Morlet wavelet
extract_max(ignore_sessions=False, *args, **kwargs)¶ Extract maximum of each feature
- Args:
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
fex: (Fex) maximum values for each feature
extract_mean(ignore_sessions=False, *args, **kwargs)¶ Extract mean of each feature
- Args:
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
Fex: mean values for each feature
extract_min(ignore_sessions=False, *args, **kwargs)¶ Extract minimum of each feature
- Args:
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
Fex: (Fex) minimum values for each feature
extract_multi_wavelet(min_freq=0.06, max_freq=0.66, bank=8, *args, **kwargs)¶ Convolve with a bank of morlet wavelets.
Wavelets are equally spaced from min to max frequency. See extract_wavelet for more information and options.
- Args:
min_freq: (float) minimum frequency to extract
max_freq: (float) maximum frequency to extract
bank: (int) size of wavelet bank
num_cyc: (float) number of cycles for wavelet
mode: (str) feature to extract, e.g., ['complex', 'filtered', 'phase', 'magnitude', 'power']
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
convolved: (Fex instance)
extract_summary(mean=True, max=True, min=True, ignore_sessions=False, *args, **kwargs)¶ Extract summary of multiple features
- Args:
mean: (bool) extract mean of features
max: (bool) extract max of features
min: (bool) extract min of features
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
fex: (Fex)
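- Example:
A brief sketch of extracting the mean, max, and min of every feature in one call (fex here is an existing Fex instance):
>>> summary = fex.extract_summary(mean=True, max=True, min=True)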
extract_wavelet(freq, num_cyc=3, mode='complex', ignore_sessions=False)¶ Perform feature extraction by convolving with a complex morlet wavelet
- Args:
freq: (float) frequency to extract
num_cyc: (float) number of cycles for wavelet
mode: (str) feature to extract, e.g., ['complex', 'filtered', 'phase', 'magnitude', 'power']
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
convolved: (Fex instance)
facebox()¶ Returns the facebox data
Returns the facebox data using fex.facebox_columns.
- Returns:
DataFrame: facebox data
info()¶ Print all meta data of fex
Loops through metadata set in self._metadata and prints out the information.
input()¶ Returns input column as string
Returns input data in the “input” column.
- Returns:
string: path to input image
isc(col, index='frame', columns='input', method='pearson')¶ Compute the intersubject correlation (ISC) of a column across videos or subjects.
- Args:
col (str): Column name to compute the ISC for.
index (str, optional): Column to be used in computing ISC. Usually this would be the column identifying the time such as the number of the frame. Defaults to "frame".
columns (str, optional): Column to be used for ISC. Usually this would be the column identifying the video or subject. Defaults to "input".
method (str, optional): Method to use for correlation: pearson, kendall, or spearman. Defaults to "pearson".
- Returns:
DataFrame: Correlation matrix with index as columns
itersessions()¶ Iterate over Fex sessions as (session, series) pairs.
- Returns:
it: a generator that iterates over the sessions of the fex instance
landmark()¶ Returns the landmark data
Returns landmark data using the columns set in fex.landmark_columns.
- Returns:
DataFrame: landmark data
landmark_x()¶ Returns the x landmarks.
Returns the x-coordinates for facial landmarks looking for “x” in fex.landmark_columns.
- Returns:
DataFrame: x landmarks.
landmark_y()¶ Returns the y landmarks.
Returns the y-coordinates for facial landmarks looking for “y” in fex.landmark_columns.
- Returns:
DataFrame: y landmarks.
plot_aus(row_n, model=None, vectorfield=None, muscles=None, ax=None, color='k', linewidth=1, linestyle='-', gaze=None, *args, **kwargs)¶
plot_detections(draw_landmarks=True, draw_facelines=True, muscle=False)¶ Plots detection results by Feat.
- Args:
draw_landmarks (bool, optional): Whether to draw landmarks. Defaults to True.
draw_facelines (bool, optional): Whether to draw face lines. Defaults to True.
muscle (bool, optional): Whether to draw muscle activations. Defaults to False.
- Returns:
axes: handle to plot
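- Example:
A brief sketch (fex here is a Fex instance returned by Detector.detect_image, so the input image path is available for plotting):
>>> axes = fex.plot_detections(draw_landmarks=True, muscle=False)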
predict(X, y, model=<class 'sklearn.linear_model._base.LinearRegression'>, *args, **kwargs)¶ Predicts y from X using a sklearn model.
Predict a variable of interest y using your model of choice from X, which can be a list of columns of the Fex instance or a dataframe.
read_affectiva(filename=None, *args, **kwargs)¶ Reads facial expression detection results from Affectiva
- Args:
filename (string, optional): Path to file. Defaults to None.
- Returns:
Fex
read_facet(filename=None, *args, **kwargs)¶ Reads facial expression detection results from FACET
- Args:
filename (string, optional): Path to file. Defaults to None.
- Returns:
Fex
read_feat(filename=None, *args, **kwargs)¶ Reads facial expression detection results from Feat Detector
- Args:
filename (string, optional): Path to file. Defaults to None.
- Returns:
Fex
read_file(*args, **kwargs)¶ Loads file into FEX class
This function checks the detector set in fex.detector and calls the appropriate read function that helps utilize functionalities of Feat.
- Available detectors include:
FACET, OpenFace, Affectiva, Feat
- Returns:
DataFrame: Fex class
read_openface(filename=None, *args, **kwargs)¶ Reads facial expression detection results from OpenFace
- Args:
filename (string, optional): Path to file. Defaults to None.
- Returns:
Fex
rectification(std=3)¶ Removes time points when the face position moved more than N standard deviations from the mean.
- Args:
std (default 3): standard deviation from mean to remove outlier face locations
- Returns:
data: cleaned FEX object
regress(X, y, fit_intercept=True, *args, **kwargs)¶ Regress using nltools.stats.regress.
fMRI-like regression to predict Fex activity (y) from set of regressors (X).
- Args:
X (list or str): Independent variables (regressors).
y (list or str): Dependent variable to be predicted.
fit_intercept (bool): Whether to add intercept before fitting. Defaults to True.
- Returns:
betas, t-stats, p-values, df, residuals
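- Example:
A brief sketch regressing two AU columns on a design column (the column names are hypothetical; fex is an existing Fex instance containing them):
>>> b, t, p, df, res = fex.regress(X=['condition'], y=['AU01', 'AU12'])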
time()¶ Returns the time data
Returns the time information using fex.time_columns.
- Returns:
DataFrame: time data
ttest_1samp(popmean=0, threshold_dict=None)¶ Conducts 1 sample ttest.
Uses scipy.stats.ttest_1samp to conduct 1 sample ttest.
- Args:
popmean (int, optional): Population mean to test against. Defaults to 0.
threshold_dict ([type], optional): Dictionary for thresholding. Defaults to None. [NOT IMPLEMENTED]
- Returns:
t, p: t-statistics and p-values
ttest_ind(col, sessions, threshold_dict=None)¶ Conducts 2 sample ttest.
Uses scipy.stats.ttest_ind to conduct 2 sample ttest on column col between sessions.
- Args:
col (str): Column name to compare in a t-test between sessions
sessions (tuple): Tuple of session names stored in Fex.sessions.
threshold_dict ([type], optional): Dictionary for thresholding. Defaults to None. [NOT IMPLEMENTED]
- Returns:
t, p: t-statistics and p-values
upsample(target, target_type='hz', **kwargs)¶ Upsample Fex columns. Relies on nltools.stats.upsample, but ensures that returned object is a Fex object.
- Args:
target (float): upsampling target
target_type (str): type of target; default 'hz' (also 'samples', 'seconds')
kwargs: additional inputs to nltools.stats.upsample
class feat.data.FexSeries(*args, **kwargs)¶
Bases: pandas.core.series.Series
This is a sub-class of pandas Series. While it has no additional methods of its own, it is required to retain normal slicing functionality for the Fex class, i.e. how slicing is typically handled in pandas. All methods should be called on Fex below.
aus()¶ Returns the Action Units data
- Returns:
DataFrame: Action Units data
design()¶ Returns the design data
- Returns:
DataFrame: design data
emotions()¶ Returns the emotion data
- Returns:
DataFrame: emotion data
facebox()¶ Returns the facebox data
- Returns:
DataFrame: facebox data
info()¶ Print class meta data.
input()¶ Returns input column as string
- Returns:
string: path to input image
landmark()¶ Returns the landmark data
- Returns:
DataFrame: landmark data
landmark_x()¶ Returns the x landmarks.
- Returns:
DataFrame: x landmarks.
landmark_y()¶ Returns the y landmarks.
- Returns:
DataFrame: y landmarks.
time()¶ Returns the time data
- Returns:
DataFrame: time data
class feat.data.Fextractor¶
Bases: object
Fextractor is a class that extracts and merges features from a Fex instance in preparation for data analysis.
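- Example:
A brief sketch of a typical workflow: extract several summary features from a Fex instance and merge them (fex here is an existing Fex instance):
>>> from feat.data import Fextractor
>>> extractor = Fextractor()
>>> extractor.mean(fex_object=fex)
>>> extractor.max(fex_object=fex)
>>> merged = extractor.merge(out_format='long')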
boft(fex_object, min_freq=0.06, max_freq=0.66, bank=8, *args, **kwargs)¶ Extract Bag of Temporal features
- Args:
fex_object: (Fex) Fex instance to extract features from.
min_freq: minimum frequency of temporal filters
max_freq: maximum frequency of temporal filters
bank: number of temporal filter banks; filters are on an exponential scale
- Returns:
wavs: list of Morlet wavelets with corresponding freq
hzs: list of hzs for each Morlet wavelet
max(fex_object, ignore_sessions=False, *args, **kwargs)¶ Extract maximum of each feature
- Args:
fex_object: (Fex) Fex instance to extract features from.
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
Fex: (Fex) maximum values for each feature
mean(fex_object, ignore_sessions=False, *args, **kwargs)¶ Extract mean of each feature
- Args:
fex_object: (Fex) Fex instance to extract features from.
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
Fex: mean values for each feature
merge(out_format='long')¶ Merge all extracted features to a single dataframe
- Args:
out_format: (str) Output format of merged data. Can be set to 'long' or 'wide'. Defaults to 'long'.
- Returns:
merged: (DataFrame) DataFrame containing merged features extracted from a Fex instance.
min(fex_object, ignore_sessions=False, *args, **kwargs)¶ Extract minimum of each feature
- Args:
fex_object: (Fex) Fex instance to extract features from.
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
Fex: (Fex) minimum values for each feature
multi_wavelet(fex_object, min_freq=0.06, max_freq=0.66, bank=8, *args, **kwargs)¶ Convolve with a bank of morlet wavelets.
Wavelets are equally spaced from min to max frequency. See extract_wavelet for more information and options.
- Args:
fex_object: (Fex) Fex instance to extract features from.
min_freq: (float) minimum frequency to extract
max_freq: (float) maximum frequency to extract
bank: (int) size of wavelet bank
num_cyc: (float) number of cycles for wavelet
mode: (str) feature to extract, e.g., ['complex', 'filtered', 'phase', 'magnitude', 'power']
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
convolved: (Fex instance)
summary(fex_object, mean=False, max=False, min=False, ignore_sessions=False, *args, **kwargs)¶ Extract summary of multiple features
- Args:
fex_object: (Fex) Fex instance to extract features from.
mean: (bool) extract mean of features
max: (bool) extract max of features
min: (bool) extract min of features
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
fex: (Fex)
wavelet(fex_object, freq, num_cyc=3, mode='complex', ignore_sessions=False)¶ Perform feature extraction by convolving with a complex morlet wavelet
- Args:
fex_object: (Fex) Fex instance to extract features from.
freq: (float) frequency to extract
num_cyc: (float) number of cycles for wavelet
mode: (str) feature to extract, e.g., ['complex', 'filtered', 'phase', 'magnitude', 'power']
ignore_sessions: (bool) ignore sessions or extract separately by sessions if available.
- Returns:
convolved: (Fex instance)
feat.plotting module¶
feat.plotting.draw_lineface(currx, curry, ax=None, color='k', linestyle='-', linewidth=1, gaze=None, *args, **kwargs)¶ Plot Line Face
- Args:
currx: vector (len(68)) of x coordinates
curry: vector (len(68)) of y coordinates
ax: matplotlib axis to add
color: matplotlib line color
linestyle: matplotlib linestyle
linewidth: matplotlib linewidth
gaze: array (len(4)) of gaze vectors (fifth value is whether to draw vectors)
feat.plotting.draw_muscles(currx, curry, au=None, ax=None, *args, **kwargs)¶ Draw Muscles
- Args:
currx: vector (len(68)) of x coordinates
curry: vector (len(68)) of y coordinates
ax: matplotlib axis to add
feat.plotting.draw_vectorfield(reference, target, color='r', scale=1, width=0.007, ax=None, *args, **kwargs)¶ Draw vectorfield from reference to target
- Args:
reference: reference landmarks (2,68)
target: target landmarks (2,68)
ax: matplotlib axis instance
au: vector of action units (len(17))
feat.plotting.get_heat(muscle, au, log)¶ Function to create heatmap from au vector
- Args:
muscle (string): string representation of a muscle
au (list): vector of action units
log (boolean): whether the action unit values are on a log scale
- Returns:
color of muscle according to its au value
feat.plotting.plot_face(model=None, au=None, vectorfield=None, muscles=None, ax=None, feature_range=False, color='k', linewidth=1, linestyle='-', gaze=None, *args, **kwargs)¶ Function to plot faces
- Args:
model: sklearn PLSRegression instance
au: vector of action units (same length as model.n_components)
vectorfield: (dict) {'target': target_array, 'reference': reference_array}
muscles: (dict) {'muscle': color}
ax: matplotlib axis handle
feature_range (tuple, default: None): If a tuple with (min, max), scale input AU intensities to (min, max) before prediction.
color: matplotlib color
linewidth: matplotlib linewidth
linestyle: matplotlib linestyle
gaze: array of gaze vectors (len(4))
- Returns:
ax: plot handle
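- Example:
A brief sketch of plotting a neutral face with the default (pretrained PLS) model, assuming it expects a 20-element AU vector:
>>> import numpy as np
>>> from feat.plotting import plot_face
>>> ax = plot_face(au=np.zeros(20))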
feat.plotting.predict(au, model=None, feature_range=None)¶ Helper function to predict landmarks from au given a sklearn model
- Args:
au: vector of action unit intensities
model: sklearn pls object (uses pretrained model by default)
feature_range (tuple, default: None): If a tuple with (min, max), scale input AU intensities to (min, max) before prediction.
- Returns:
landmarks: Array of landmarks (2,68)
feat.utils module¶
feat.utils.get_resource_path()¶ Get path to feat resource directory.
feat.utils.load_h5(file_name='pyfeat_aus_to_landmarks.h5')¶ Load the h5 PLS model for plotting.
- Args:
file_name (str, optional): Specify model to load. Defaults to 'pyfeat_aus_to_landmarks.h5'.
- Returns:
model: PLS model
feat.utils.read_affectiva(affectivafile, orig_cols=False)¶ This function reads in an Affectiva file processed through the https://github.com/cosanlab/affectiva-api-app.
- Args:
affectivafile: file to read
orig_cols: If True, convert original colnames to FACS names
- Returns:
Fex of processed facial expressions
feat.utils.read_facet(facetfile, features=None, raw=False, sampling_freq=None)¶ This function reads in an iMotions-FACET exported facial expression file.
- Args:
facetfile: iMotions-FACET file. Files from iMotions 5, 6, and 7 have been tested and supported.
features: If a list of iMotions-FACET column names is passed, those are returned. Otherwise, default columns are returned in the following format: ['Timestamp', 'FaceRectX', 'FaceRectY', 'FaceRectWidth', 'FaceRectHeight', 'Joy', 'Anger', 'Surprise', 'Fear', 'Contempt', 'Disgust', 'Sadness', 'Confusion', 'Frustration', 'Neutral', 'Positive', 'Negative', 'AU1', 'AU2', 'AU4', 'AU5', 'AU6', 'AU7', 'AU9', 'AU10', 'AU12', 'AU14', 'AU15', 'AU17', 'AU18', 'AU20', 'AU23', 'AU24', 'AU25', 'AU26', 'AU28', 'AU43', 'Yaw', 'Pitch', 'Roll']. Note that these column names are different from the original files, which have ' Evidence' or ' Degrees' appended to each column.
raw (default=False): Set to True to return all columns without processing.
sampling_freq: sampling frequency to pass to Fex
- Returns:
dataframe of processed facial expressions
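- Example:
A brief sketch of loading an iMotions-FACET export (the filename and sampling rate are hypothetical):
>>> from feat.utils import read_facet
>>> facet = read_facet("imotions_export.txt", sampling_freq=30)
>>> facet.head()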
feat.utils.read_openface(openfacefile, features=None)¶ This function reads in an OpenFace exported facial expression file.
- Args:
features: If a list of column names are passed, those are returned. Otherwise, default returns the following features: [‘frame’, ‘timestamp’, ‘confidence’, ‘success’, ‘gaze_0_x’,
‘gaze_0_y’, ‘gaze_0_z’, ‘gaze_1_x’, ‘gaze_1_y’, ‘gaze_1_z’, ‘pose_Tx’, ‘pose_Ty’, ‘pose_Tz’, ‘pose_Rx’, ‘pose_Ry’, ‘pose_Rz’, ‘x_0’, ‘x_1’, ‘x_2’, ‘x_3’, ‘x_4’, ‘x_5’, ‘x_6’, ‘x_7’, ‘x_8’, ‘x_9’, ‘x_10’, ‘x_11’, ‘x_12’, ‘x_13’, ‘x_14’, ‘x_15’, ‘x_16’, ‘x_17’, ‘x_18’, ‘x_19’, ‘x_20’, ‘x_21’, ‘x_22’, ‘x_23’, ‘x_24’, ‘x_25’, ‘x_26’, ‘x_27’, ‘x_28’, ‘x_29’, ‘x_30’, ‘x_31’, ‘x_32’, ‘x_33’, ‘x_34’, ‘x_35’, ‘x_36’, ‘x_37’, ‘x_38’, ‘x_39’, ‘x_40’, ‘x_41’, ‘x_42’, ‘x_43’, ‘x_44’, ‘x_45’, ‘x_46’, ‘x_47’, ‘x_48’, ‘x_49’, ‘x_50’, ‘x_51’, ‘x_52’, ‘x_53’, ‘x_54’, ‘x_55’, ‘x_56’, ‘x_57’, ‘x_58’, ‘x_59’, ‘x_60’, ‘x_61’, ‘x_62’, ‘x_63’, ‘x_64’, ‘x_65’, ‘x_66’, ‘x_67’, ‘y_0’, ‘y_1’, ‘y_2’, ‘y_3’, ‘y_4’, ‘y_5’, ‘y_6’, ‘y_7’, ‘y_8’, ‘y_9’, ‘y_10’, ‘y_11’, ‘y_12’, ‘y_13’, ‘y_14’, ‘y_15’, ‘y_16’, ‘y_17’, ‘y_18’, ‘y_19’, ‘y_20’, ‘y_21’, ‘y_22’, ‘y_23’, ‘y_24’, ‘y_25’, ‘y_26’, ‘y_27’, ‘y_28’, ‘y_29’, ‘y_30’, ‘y_31’, ‘y_32’, ‘y_33’, ‘y_34’, ‘y_35’, ‘y_36’, ‘y_37’, ‘y_38’, ‘y_39’, ‘y_40’, ‘y_41’, ‘y_42’, ‘y_43’, ‘y_44’, ‘y_45’, ‘y_46’, ‘y_47’, ‘y_48’, ‘y_49’, ‘y_50’, ‘y_51’, ‘y_52’, ‘y_53’, ‘y_54’, ‘y_55’, ‘y_56’, ‘y_57’, ‘y_58’, ‘y_59’, ‘y_60’, ‘y_61’, ‘y_62’, ‘y_63’, ‘y_64’, ‘y_65’, ‘y_66’, ‘y_67’, ‘X_0’, ‘X_1’, ‘X_2’, ‘X_3’, ‘X_4’, ‘X_5’, ‘X_6’, ‘X_7’, ‘X_8’, ‘X_9’, ‘X_10’, ‘X_11’, ‘X_12’, ‘X_13’, ‘X_14’, ‘X_15’, ‘X_16’, ‘X_17’, ‘X_18’, ‘X_19’, ‘X_20’, ‘X_21’, ‘X_22’, ‘X_23’, ‘X_24’, ‘X_25’, ‘X_26’, ‘X_27’, ‘X_28’, ‘X_29’, ‘X_30’, ‘X_31’, ‘X_32’, ‘X_33’, ‘X_34’, ‘X_35’, ‘X_36’, ‘X_37’, ‘X_38’, ‘X_39’, ‘X_40’, ‘X_41’, ‘X_42’, ‘X_43’, ‘X_44’, ‘X_45’, ‘X_46’, ‘X_47’, ‘X_48’, ‘X_49’, ‘X_50’, ‘X_51’, ‘X_52’, ‘X_53’, ‘X_54’, ‘X_55’, ‘X_56’, ‘X_57’, ‘X_58’, ‘X_59’, ‘X_60’, ‘X_61’, ‘X_62’, ‘X_63’, ‘X_64’, ‘X_65’, ‘X_66’, ‘X_67’, ‘Y_0’, ‘Y_1’, ‘Y_2’, ‘Y_3’, ‘Y_4’, ‘Y_5’, ‘Y_6’, ‘Y_7’, ‘Y_8’, ‘Y_9’, ‘Y_10’, ‘Y_11’, ‘Y_12’, ‘Y_13’, ‘Y_14’, ‘Y_15’, ‘Y_16’, ‘Y_17’, ‘Y_18’, ‘Y_19’, ‘Y_20’, ‘Y_21’, ‘Y_22’, ‘Y_23’, ‘Y_24’, ‘Y_25’, ‘Y_26’, ‘Y_27’, ‘Y_28’, ‘Y_29’, ‘Y_30’, ‘Y_31’, ‘Y_32’, ‘Y_33’, ‘Y_34’, ‘Y_35’, ‘Y_36’, ‘Y_37’, ‘Y_38’, ‘Y_39’, ‘Y_40’, ‘Y_41’, ‘Y_42’, ‘Y_43’, ‘Y_44’, ‘Y_45’, ‘Y_46’, ‘Y_47’, ‘Y_48’, ‘Y_49’, ‘Y_50’, ‘Y_51’, ‘Y_52’, ‘Y_53’, ‘Y_54’, ‘Y_55’, ‘Y_56’, ‘Y_57’, ‘Y_58’, ‘Y_59’, ‘Y_60’, ‘Y_61’, ‘Y_62’, ‘Y_63’, ‘Y_64’, ‘Y_65’, ‘Y_66’, ‘Y_67’, ‘Z_0’, ‘Z_1’, ‘Z_2’, ‘Z_3’, ‘Z_4’, ‘Z_5’, ‘Z_6’, ‘Z_7’, ‘Z_8’, ‘Z_9’, ‘Z_10’, ‘Z_11’, ‘Z_12’, ‘Z_13’, ‘Z_14’, ‘Z_15’, ‘Z_16’, ‘Z_17’, ‘Z_18’, ‘Z_19’, ‘Z_20’, ‘Z_21’, ‘Z_22’, ‘Z_23’, ‘Z_24’, ‘Z_25’, ‘Z_26’, ‘Z_27’, ‘Z_28’, ‘Z_29’, ‘Z_30’, ‘Z_31’, ‘Z_32’, ‘Z_33’, ‘Z_34’, ‘Z_35’, ‘Z_36’, ‘Z_37’, ‘Z_38’, ‘Z_39’, ‘Z_40’, ‘Z_41’, ‘Z_42’, ‘Z_43’, ‘Z_44’, ‘Z_45’, ‘Z_46’, ‘Z_47’, ‘Z_48’, ‘Z_49’, ‘Z_50’, ‘Z_51’, ‘Z_52’, ‘Z_53’, ‘Z_54’, ‘Z_55’, ‘Z_56’, ‘Z_57’, ‘Z_58’, ‘Z_59’, ‘Z_60’, ‘Z_61’, ‘Z_62’, ‘Z_63’, ‘Z_64’, ‘Z_65’, ‘Z_66’, ‘Z_67’, ‘p_scale’, ‘p_rx’, ‘p_ry’, ‘p_rz’, ‘p_tx’, ‘p_ty’, ‘p_0’, ‘p_1’, ‘p_2’, ‘p_3’, ‘p_4’, ‘p_5’, ‘p_6’, ‘p_7’, ‘p_8’, ‘p_9’, ‘p_10’, ‘p_11’, ‘p_12’, ‘p_13’, ‘p_14’, ‘p_15’, ‘p_16’, ‘p_17’, ‘p_18’, ‘p_19’, ‘p_20’, ‘p_21’, ‘p_22’, ‘p_23’, ‘p_24’, ‘p_25’, ‘p_26’, ‘p_27’, ‘p_28’, ‘p_29’, ‘p_30’, ‘p_31’, ‘p_32’, ‘p_33’, ‘AU01_r’, ‘AU02_r’, ‘AU04_r’, ‘AU05_r’, ‘AU06_r’, ‘AU07_r’, ‘AU09_r’, ‘AU10_r’, ‘AU12_r’, ‘AU14_r’, ‘AU15_r’, ‘AU17_r’, ‘AU20_r’, ‘AU23_r’, ‘AU25_r’, ‘AU26_r’, ‘AU45_r’, ‘AU01_c’, ‘AU02_c’, ‘AU04_c’, ‘AU05_c’, ‘AU06_c’, ‘AU07_c’, ‘AU09_c’, ‘AU10_c’, ‘AU12_c’, ‘AU14_c’, ‘AU15_c’, ‘AU17_c’, ‘AU20_c’, ‘AU23_c’, ‘AU25_c’, ‘AU26_c’, ‘AU28_c’, ‘AU45_c’]
- Returns:
dataframe of processed facial expressions
feat.utils.registration
(face_lms, neutral=array([[37.51499407, 118.99554304], [38.34746726, 135.93119299], [40.77550103, 152.83280453], [44.10928582, 169.12794022], [49.98283172, 184.53328584], [59.18894827, 198.0161361], [70.41509055, 209.2829929], [83.65962788, 217.82577978], [98.67478614, 220.00636722], [113.36502269, 217.35622274], [126.09720342, 208.6155414], [137.37278217, 197.26636201], [146.15109522, 183.95054535], [151.72032547, 168.70328048], [154.90171534, 152.54959547], [157.01705756, 136.07919401], [157.81240022, 119.28714732], [45.87342276, 109.05187535], [53.83702202, 101.43275043], [65.6123153, 99.44649503], [77.49003982, 101.34627038], [88.31833069, 105.66229287], [108.80512998, 105.18583249], [120.18051884, 100.8485088], [131.67122653, 99.22426247], [142.80408737, 101.39810664], [150.09271076, 108.74640334], [98.93955017, 117.16643104], [99.0113979, 128.44882091], [99.09059392, 139.7133518], [99.22411613, 151.32196735], [85.97238779, 158.19140086], [92.20644686, 160.61659752], [98.67862474, 162.56437315], [105.26853264, 160.62509056], [111.14227857, 158.32687793], [59.22833204, 118.63189571], [66.08746862, 114.39263502], [74.66886627, 114.59919006], [81.80683311, 120.00819189], [74.3442616, 121.70551759], [65.72377694, 121.82223252], [114.75228899, 119.90654628], [122.2983238, 114.26349216], [130.61954433, 114.38399043], [137.03708639, 118.48489575], [131.21518765, 121.51217889], [122.97461038, 121.56526096], [75.39827955, 179.40706409], [84.55991401, 176.2145797], [92.90235587, 174.4243212], [98.56534032, 176.06536597], [104.97777373, 174.45766844], [113.11257495, 176.39970964], [121.19973609, 179.19790185], [113.16310624, 185.69051009], [105.26365305, 188.31443911], [98.41771871, 188.96563941], [92.22402827, 188.38538897], [84.05109731, 185.74954658], [79.18422925, 179.80657222], [92.71723171, 179.52017819], [98.52973445, 180.16303655], [105.05932173, 179.42368921], [117.43706438, 179.71092599], [104.90869095, 180.32984592], [98.35933953, 181.1598177], [92.49485175, 180.48994809]]), method='fullface')¶ Register faces to a neutral face.
Affine registration of face landmarks to neutral face.
- Args:
face_lms (array): face landmarks to register with shape (n, 136). Columns 0~67 are x coordinates and 68~135 are y coordinates
neutral (array): target neutral face array that face_lm will be registered to
method (str or list): If string, register to all landmarks ('fullface', default), or inner parts of face nose, mouth, eyes, and brows ('inner'). If list, pass landmarks to register to, e.g. [27, 28, 29, 30, 36, 39, 42, 45]
- Return:
registered_lms: registered landmarks in shape (n,136)
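- Example:
A brief sketch registering landmarks to the default neutral face (the landmarks here are random placeholder values with the documented (n, 136) shape):
>>> import numpy as np
>>> from feat.utils import registration
>>> face_lms = np.random.rand(10, 136) * 200
>>> registered = registration(face_lms, method='fullface')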
feat.utils.softmax(x)¶ Softmax function to change log likelihood evidence values to probabilities. Use with Evidence values from FACET.
- Args:
x: value to softmax
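- Example:
A brief usage sketch converting a single FACET evidence value to a probability:
>>> from feat.utils import softmax
>>> prob = softmax(1.5)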
feat.version module¶
Module contents¶
Top-level package for FEAT.