machine learning - How to predict a single new sample after dict-vectorization in python scikit-learn?


Keywords:python  2.7 


Question: 

I am using a logistic regression classifier to predict an ethnicity class label (0 or 1). My data is split into training and testing samples and dict-vectorized into a sparse matrix.

The following is the working code, where I predict and validate against X_train and X_test, which are the vectorized features:

for i in mass[k]:
    df = df_temp  # reset df before each loop
    #$$
    if 1==1:
        count += 1
    ethnicity_tar = str(i)
    ############################################
    ############################################

    def ethnicity_target(row):
        try:
            if row[ethnicity_var] == ethnicity_tar:
                return 1
            else:
                return 0
        except: return None
    df['ethnicity_scan'] = df.apply(ethnicity_target, axis=1)
    print '1=', ethnicity_tar
    print '0=', 'non-'+ethnicity_tar

    # Random sampling a smaller dataframe for debugging
    rows = df.sample(n=subsample_size, random_state=seed) # Seed gives fixed randomness
    df = DataFrame(rows)
    print 'Class count:'
    print df['ethnicity_scan'].value_counts()

    # Assign X and y variables
    X = df.raw_name.values
    X2 = df.name.values
    X3 = df.gender.values
    X4 = df.location.values
    y = df.ethnicity_scan.values

    # Feature extraction functions
    def feature_full_name(nameString):
        try:
            full_name = nameString
            if len(full_name) > 1: # not accept name with only 1 character
                return full_name
            else: return '?'
        except: return '?'

    def feature_full_last_name(nameString):
        try:
            last_name = nameString.rsplit(None, 1)[-1]
            if len(last_name) > 1: # not accept name with only 1 character
                return last_name
            else: return '?'
        except: return '?'

    def feature_full_first_name(nameString):
        try:
            first_name = nameString.rsplit(' ', 1)[0]
            if len(first_name) > 1: # not accept name with only 1 character
                return first_name
            else: return '?'
        except: return '?'

    # Transform format of X variables, and spit out a numpy array for all features
    my_dict = [{'last-name': feature_full_last_name(i)} for i in X]
    my_dict5 = [{'first-name': feature_full_first_name(i)} for i in X]

    all_dict = []
    for i in range(0, len(my_dict)):
        temp_dict = dict(
            my_dict[i].items() + my_dict5[i].items()
            )
        all_dict.append(temp_dict)

    newX = dv.fit_transform(all_dict)

    # Separate the training and testing data sets
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(newX, y, test_size=testTrainSplit)

    # Fitting X and y into model, using training data
    classifierUsed2.fit(X_train, y_train)

    # Making predictions using trained data
    y_train_predictions = classifierUsed2.predict(X_train)
    y_test_predictions = classifierUsed2.predict(X_test)

However, I would like to predict a single name, for example "John Carter", and get its ethnicity label. I replaced y_train_predictions = classifierUsed2.predict(X_train) and y_test_predictions = classifierUsed2.predict(X_test) with the following line, but it results in an error:

print classifierUsed2.predict(["John Carter"])

# Error:
ValueError: X has 1 features per sample; expecting 103916

1 Answer: 

You need to transform the new sample in exactly the same way as the training data, using the already-fitted DictVectorizer (call dv.transform, not dv.fit_transform, so you stay in the same 103916-dimensional feature space). Passing the raw string gives the classifier one feature instead of 103916, hence the error. Since your training features are dicts built from each name, the single name has to go through the same feature extraction first:

name = "John Carter"
print classifierUsed2.predict(dv.transform([{
    'last-name': feature_full_last_name(name),
    'first-name': feature_full_first_name(name),
    }]))
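To make the idea concrete, here is a minimal end-to-end sketch of the same pattern with toy data (the names, labels, and variable names below are made up for illustration, not taken from the question's dataset): fit the DictVectorizer once on the training dicts, then reuse that fitted vectorizer to transform any new sample before predicting.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for the question's name features
train_dicts = [
    {'first-name': 'John', 'last-name': 'Carter'},
    {'first-name': 'Mary', 'last-name': 'Smith'},
    {'first-name': 'Wei',  'last-name': 'Zhang'},
    {'first-name': 'Li',   'last-name': 'Wang'},
]
y = [0, 0, 1, 1]

dv = DictVectorizer()
X_train = dv.fit_transform(train_dicts)  # fit ONLY on training data

clf = LogisticRegression()
clf.fit(X_train, y)

# New single sample: same feature extraction, then transform (NOT fit_transform),
# so X_new has the same number of columns as X_train
new_name = "John Carter"
new_dict = {
    'first-name': new_name.rsplit(' ', 1)[0],
    'last-name': new_name.rsplit(None, 1)[-1],
}
X_new = dv.transform([new_dict])

print(clf.predict(X_new))
```

Calling dv.fit_transform on the new sample instead would rebuild the vocabulary from that one sample and reproduce the "X has 1 features per sample" mismatch.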