What does an input_shape mismatch mean? - TensorFlow 2.0

I get InvalidArgumentError: Incompatible shapes: [6,2,3] vs. [6,1]. What does this mean?
What does an incompatible shape mean?
InvalidArgumentErrorTraceback (most recent call last)
<ipython-input-28-c2078c9c10e8> in <module>()
----> 1 odel.fit(x, y, epochs=500)

4 frames
/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.pyc in __exit__(self, type_arg, value_arg, traceback_arg)
    526             None, None,
    527             compat.as_text(c_api.TF_Message(self.status.status)),
--> 528             c_api.TF_GetCode(self.status.status))
    529     # Delete the underlying status object from memory otherwise it stays alive
    530     # as there is a reference to status from this from the traceback due to

InvalidArgumentError: Incompatible shapes: [6,2,3] vs. [6,1]
import tensorflow as tf
import numpy as np
from tensorflow import keras
odel = tf.keras.Sequential([keras.layers.Dense(units=3)])
odel.compile(optimizer='sgd', loss='mean_squared_error')
x = np.array([[[1,2],[6,3]],[[3,4],[6,3]],[[4,5],[6,6]],[[5,6],[6,6]],
[[6,7],[6,4]],[[7,8],[6,2]]], dtype=float)
y = np.array([6, 10, 15, 17, 17,17], dtype=float)
odel.fit(x, y, epochs=500)
a=np.array([[2,3]])
print(odel.predict(a))

There are quite a few things that need to be addressed in this code.
1 - input_shape
The first layer of a tf.keras.Sequential model should have an input_shape: the shape of one element of the features.
2 - The label
The label must have at least two dimensions (and likewise for the features). y has shape [6], whereas it should have shape [b, 3], 3 being the number of units of the last layer and b the number of samples in the batch. Looking at the features, b is 6.
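Here is a minimal sketch of one way to make the shapes consistent, assuming the six values in y are meant as one scalar target per sample. Flattening each 2x2 feature matrix to a length-4 vector and using a single output unit (so the label becomes shape [6, 1] rather than [6, 3]) are my assumptions, not something stated in the question:

import numpy as np
import tensorflow as tf
from tensorflow import keras

# 6 samples, each a 2x2 feature matrix, flattened to a vector of length 4
x = np.array([[[1, 2], [6, 3]], [[3, 4], [6, 3]], [[4, 5], [6, 6]],
              [[5, 6], [6, 6]], [[6, 7], [6, 4]], [[7, 8], [6, 2]]],
             dtype=float).reshape(6, 4)
# one scalar target per sample -> shape (6, 1), matching the single output unit below
y = np.array([6, 10, 15, 17, 17, 17], dtype=float).reshape(6, 1)

model = tf.keras.Sequential([
    keras.layers.Dense(units=1, input_shape=(4,))  # input_shape = shape of one sample
])
model.compile(optimizer='adam', loss='mean_squared_error')  # adam instead of sgd, just for stability on unscaled features
model.fit(x, y, epochs=500, verbose=0)

# the prediction input must have the same per-sample shape (4,)
print(model.predict(np.array([[2, 3, 6, 3]], dtype=float)))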

Related

Set thresholds in PySpark multinomial logistic regression

I would like to perform a multinomial logistic regression, but I can't set the threshold and thresholds parameters correctly. Consider the following DataFrame:
from pyspark.ml.linalg import DenseVector
test_train_df = (
    sqlc
    .createDataFrame([(0, DenseVector([-1.0, 1.2, 0.7])),
                      (0, DenseVector([3.1, -2.0, -2.9])),
                      (1, DenseVector([1.0, 0.8, 0.3])),
                      (1, DenseVector([4.2, 1.4, -1.7])),
                      (0, DenseVector([-1.9, 2.5, -2.3])),
                      (2, DenseVector([2.6, -0.2, 0.2])),
                      (1, DenseVector([0.3, -3.4, 1.8])),
                      (2, DenseVector([-1.0, -3.5, 4.7]))],
                     ['label', 'features'])
)
My label has 3 classes, so I have to set thresholds (plural, whose default is None) rather than threshold (singular, whose default is 0.5). Then I write:
from pyspark.ml import classification as cl
test_logit_abst = (
cl.LogisticRegression()
.setFamily('multinomial')
.setThresholds([.5, .5, .5])
)
Then I would like to fit the model on my DF:
test_logit = test_logit_abst.fit(test_train_df)
but when executing this last command I get an error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
~/anaconda3/lib/python3.6/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
~/anaconda3/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
318 "An error occurred while calling {0}{1}{2}.\n".
--> 319 format(target_id, ".", name), value)
320 else:
Py4JJavaError: An error occurred while calling o3769.fit.
: java.lang.IllegalArgumentException: requirement failed: Logistic Regression found inconsistent values for threshold and thresholds. Param threshold is set (0.5), indicating binary classification, but Param thresholds is set with length 3. Clear one Param value to fix this problem.
During handling of the above exception, another exception occurred:
IllegalArgumentException Traceback (most recent call last)
<ipython-input-211-8f3443f41b6b> in <module>()
----> 1 test_logit = test_logit_abst.fit(test_train_df)
~/anaconda3/lib/python3.6/site-packages/pyspark/ml/base.py in fit(self, dataset, params)
62 return self.copy(params)._fit(dataset)
63 else:
---> 64 return self._fit(dataset)
65 else:
66 raise ValueError("Params must be either a param map or a list/tuple of param maps, "
~/anaconda3/lib/python3.6/site-packages/pyspark/ml/wrapper.py in _fit(self, dataset)
263
264 def _fit(self, dataset):
--> 265 java_model = self._fit_java(dataset)
266 return self._create_model(java_model)
267
~/anaconda3/lib/python3.6/site-packages/pyspark/ml/wrapper.py in _fit_java(self, dataset)
260 """
261 self._transfer_params_to_java()
--> 262 return self._java_obj.fit(dataset._jdf)
263
264 def _fit(self, dataset):
~/anaconda3/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
1131 answer = self.gateway_client.send_command(command)
1132 return_value = get_return_value(
-> 1133 answer, self.gateway_client, self.target_id, self.name)
1134
1135 for temp_arg in temp_args:
~/anaconda3/lib/python3.6/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
77 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
78 if s.startswith('java.lang.IllegalArgumentException: '):
---> 79 raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
80 raise
81 return deco
IllegalArgumentException: 'requirement failed: Logistic Regression found inconsistent values for threshold and thresholds. Param threshold is set (0.5), indicating binary classification, but Param thresholds is set with length 3. Clear one Param value to fix this problem.'
The error says threshold is set. This looks strange, as the documentation says that setting thresholds (plural) clears threshold (singular), so that the value 0.5 should be deleted.
So, how can I clear threshold, since no clearThreshold() exists?
In order to achieve this I tried to clear threshold this way:
logit_abst = (
cl.LogisticRegression()
.setFamily('multinomial')
.setThresholds([.5, .5, .5])
.setThreshold(None)
)
This time the fit command works, and I even obtain the model intercept and coefficients:
test_logit.interceptVector
DenseVector([65.6445, 31.6369, -97.2814])
test_logit.coefficientMatrix
DenseMatrix(3, 3, [-76.4534, -19.4797, -79.4949, 12.3659, 4.642, 4.1057, 64.0876, 14.8377, 75.3892], 1)
But if I try to get thresholds (plural) from test_logit_abst I get an error:
test_logit_abst.getThresholds()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-214-fc1c8617ce80> in <module>()
----> 1 test_logit_abst.getThresholds()
~/anaconda3/lib/python3.6/site-packages/pyspark/ml/classification.py in getThresholds(self)
363 if not self.isSet(self.thresholds) and self.isSet(self.threshold):
364 t = self.getOrDefault(self.threshold)
--> 365 return [1.0-t, t]
366 else:
367 return self.getOrDefault(self.thresholds)
TypeError: unsupported operand type(s) for -: 'float' and 'NoneType'
What does this mean?
As a further detail, curiously (and incomprehensibly to me) inverting the order of the parameter settings produces the first error I posted above:
logit_abst = (
cl.LogisticRegression()
.setFamily('multinomial')
.setThreshold(None)
.setThresholds([.5, .5, .5])
)
Why does changing the order of the "set" instructions change the output as well?
It is a messy situation indeed...
The short answer is:
1. setThresholds (plural) not clearing the threshold (singular) seems to be a bug.
2. For multinomial classification (i.e. number of classes > 2), setThresholds does not do what you expect (and arguably you don't need it).
3. If all you need is to have some "thresholds" at the "default" value of 0.5, you don't have a problem - simply don't use any relevant argument or setThresholds statement.
4. If you really need to apply different decision thresholds to different classes in multinomial classification, you will have to do it manually, by post-processing the respective probabilities, i.e. the probability column in the transformed dataframe (it works OK with setThreshold(s) for binary classification, though); a minimal sketch is given at the end of this answer.
And now for the long answer...
Let's start with binary classification, adapting the toy data from the docs:
spark.version
# u'2.2.0'
from pyspark.ml.classification import LogisticRegression
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
bdf = sc.parallelize([
    Row(label=1.0, features=Vectors.dense(0.0, 5.0)),
    Row(label=0.0, features=Vectors.dense(1.0, 2.0)),
    Row(label=1.0, features=Vectors.dense(2.0, 1.0)),
    Row(label=0.0, features=Vectors.dense(3.0, 3.0))]).toDF()
blor = LogisticRegression(threshold=0.7, thresholds=[0.3, 0.7])
We don't need to set thresholds (plural) here - threshold=0.7 is enough, but it will be useful when illustrating the differences with setThreshold below.
blorModel = blor.fit(bdf) # works OK
blor.getThreshold()
# 0.7
blor.getThresholds()
# [0.3, 0.7]
blorModel.transform(bdf).show(truncate=False) # transform the training data
Here is the result:
+---------+-----+------------------------------------------+----------------------------------------+----------+
|features |label|rawPrediction |probability |prediction|
+---------+-----+------------------------------------------+----------------------------------------+----------+
|[0.0,5.0]|1.0 |[-1.138455151184087,1.138455151184087] |[0.242604109995602,0.757395890004398] |1.0 |
|[1.0,2.0]|0.0 |[-0.6056346859838877,0.6056346859838877] |[0.35305562698104337,0.6469443730189567]|0.0 |
|[2.0,1.0]|1.0 |[0.26586039040308496,-0.26586039040308496]|[0.5660763559614698,0.4339236440385302] |0.0 |
|[3.0,3.0]|0.0 |[1.6453673835702176,-1.6453673835702176] |[0.8382639556951765,0.16173604430482344]|0.0 |
+---------+-----+------------------------------------------+----------------------------------------+----------+
What is the meaning of thresholds=[0.3, 0.7]? The answer lies in the 2nd row, where the prediction is 0.0 despite the fact that the probability is higher for 1.0 (0.65): 0.65 is indeed higher than 0.35, but it is lower than the threshold we have set for this class (0.7), hence it is not classified as such.
Let's now try the seemingly identical operation, but with setThreshold(s) instead:
blor2 = (LogisticRegression()
.setThreshold(0.7)
.setThresholds([0.3, 0.7]) ) # works OK
blorModel2 = blor2.fit(bdf)
[...]
IllegalArgumentException: u'requirement failed: Logistic Regression getThreshold found inconsistent values for threshold (0.5) and thresholds (equivalent to 0.7)'
Nice, eh?
setThresholds (plural) seems indeed to have cleared our value of threshold (0.7) set in the previous line, as claimed in the docs, but it seemingly did so only to restore it to its default value of 0.5...
Omitting .setThreshold(0.7) gives the first error you report yourself (not shown).
Inverting the order of the parameter settings resolves the issue (!!!) and, moreover, renders both getThreshold (singular) and getThresholds (plural) operational (in contrast with your case):
blor2 = (LogisticRegression()
.setThresholds([0.3, 0.7])
.setThreshold(0.7) )
blorModel2 = blor2.fit(bdf) # works OK
blor2.getThreshold()
# 0.7
blor2.getThresholds()
# [0.30000000000000004, 0.7]
Let's move now to the multinomial case; we'll stick again to the example in the docs, with data from the Spark Github repo (they should also be available locally, in your $SPARK_HOME/data/mllib/sample_multiclass_classification_data.txt, but I am working on a Databricks notebook); it is a 3-class case, with labels in {0.0, 1.0, 2.0}.
data_path ="/FileStore/tables/sample_multiclass_classification_data.txt"
mdf = spark.read.format("libsvm").load(data_path)
Similarly to the binary case above, where the elements of our thresholds (plural) sum up to 1, let's ask for a threshold of 0.8 for class 2:
mlor = (LogisticRegression()
.setFamily("multinomial")
.setThresholds([0, 0.2, 0.8])
.setThreshold(0.8) )
mlorModel= mlor.fit(mdf) # works OK
mlor.getThreshold()
# 0.8
mlor.getThresholds()
# [0.19999999999999996, 0.8]
Looks fine, but let's ask for a prediction in the (training) dataset:
mlorModel.transform(mdf).show(truncate=False)
I have singled out only one row - it should be the 2nd from the end of the full output:
+-----+----------------------------------------------------+---------------------------------------------------------+---------------------------------------------------------------+----------+
|label|features |rawPrediction |probability |prediction|
+-----+----------------------------------------------------+---------------------------------------------------------+---------------------------------------------------------------+----------+
[...]
|0.0 |(4,[0,1,2,3],[0.111111,-0.333333,0.38983,0.166667]) |[36.67790353804905,-74.71196613173531,38.034062593686244]|[0.20486526556822454,8.619113376801409E-50,0.7951347344317755] |2.0 |
[...]
+-----+----------------------------------------------------+---------------------------------------------------------+---------------------------------------------------------------+----------+
Scrolling to the right, you'll see that despite the fact that the probability for class 2.0 here is below the threshold we have set (0.8), the row is indeed predicted as 2.0 - in contrast with the binary case demonstrated above...
So, what to do? Simply remove all the threshold-related statements; you don't need them - even setFamily is unnecessary, as the algorithm will detect by itself that you have more than 2 classes. This will give identical results to the above:
mlor = LogisticRegression() # works OK - no family, no threshold(s)
To summarize:
In both the binary & multinomial cases, what is actually returned by the algorithm is a vector of probabilities of length equal to the number of classes, with elements summing up to 1.
In the binary case only, Spark allows you to go one step further: instead of naively selecting the highest-probability class as the prediction, you can apply a user-defined threshold; this setting might be useful e.g. in cases with imbalanced data.
This threshold(s) setting has actually no effect in the multinomial case, where Spark will always return as prediction the class with the highest probability.
Despite the mess in the documentation (about which I have argued elsewhere) and the possibility of some bugs, let me say about (3) that this design choice is not unjustifiable; as it has been nicely argued elsewhere (emphasis in the original):
the statistical component of your exercise ends when you output a probability for each class of your new sample. Choosing a threshold beyond which you classify a new observation as 1 vs. 0 is not part of the statistics any more. It is part of the decision component.
Although the above argument was made for the binary case, it fully holds for the multinomial one, too...
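Finally, coming back to point 4 of the short answer: here is a minimal sketch of such manual post-processing, assuming mlorModel and mdf from the multinomial example above. The per-class thresholds and the new column name are illustrative choices; the rule mimics the p/t semantics described in the thresholds documentation (predict the class with the largest p/t, where p is the class probability and t its threshold):

from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType

class_thresholds = [0.5, 0.5, 0.8]   # hypothetical per-class thresholds

def apply_thresholds(probability):
    # scale each class probability by its threshold and pick the best class (largest p/t)
    scaled = [p / t for p, t in zip(probability, class_thresholds)]
    return float(scaled.index(max(scaled)))

apply_thresholds_udf = udf(apply_thresholds, DoubleType())

preds = mlorModel.transform(mdf)
preds = preds.withColumn('thresholdedPrediction', apply_thresholds_udf('probability'))
preds.select('probability', 'prediction', 'thresholdedPrediction').show(truncate=False)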

Tf-Idf Vectorizer with LSTM in Keras Error: Expected LSTM to have 3 dimensions

I'm working on a text classification problem with ~1M reviews, from which I have to predict the sentiment. Here is a glimpse of my dataset:
In [3]:df=pd.read_csv('amazon_review.csv')
df.head()
Out [3]:
Review_no reviewText Sentiment
1 I enjoy vintage books and movies so I enjoyed ... Happy
2 This book is a reissue of an old one; the auth... Happy
3 This was a fairly interesting read. It had ol... Happy
4 I'd never read any of the Amy Brewster mysteri... Happy
5 If you like period pieces - clothing, lingo, y... Happy
And this is the code I'm using:
In [10]: X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.33)
In[11]: y_train = y_train.map({"Happy": 1, "Content" : 2, "Unhappy" : 3 })
y_test = y_test.map({"Happy": 1, "Content" : 2, "Unhappy" : 3 })
In [12]: y_train = to_categorical(y_train)
y_test=to_categorical(y_test)
In [13]: all_text=X_train.append(X_test)
In [14]: from sklearn.feature_extraction.text import TfidfVectorizer
word_vectorizer = TfidfVectorizer(
sublinear_tf=True,
strip_accents='unicode',
analyzer='word',
token_pattern=r'\w{1,}',
stop_words='english',
ngram_range=(1, 1),
max_features=20000)
word_vectorizer.fit(all_text)
train_word_features = word_vectorizer.transform(X_train)
test_word_features = word_vectorizer.transform(X_test)
In [15]: train_word_features.shape
Out [15]: (658118,20000)
In [16]: model = Sequential()
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2,input_shape=
(658118,20000)))
model.add(Dense(4, activation='softmax'))
In [17]: model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
In [18]: model.fit(train_word_features, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(test_word_features, y_test))
I'm getting this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-90-46b758e790c5> in <module>()
2 batch_size=batch_size,
3 epochs=15,
----> 4 validation_data=(test_word_features, y_test))
~\Anaconda3\lib\site-packages\keras\models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
958 initial_epoch=initial_epoch,
959 steps_per_epoch=steps_per_epoch,
--> 960 validation_steps=validation_steps)
961
962 def evaluate(self, x, y, batch_size=32, verbose=1,
~\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1572 class_weight=class_weight,
1573 check_batch_axis=False,
-> 1574 batch_size=batch_size)
1575 # Prepare validation data.
1576 do_validation = False
~\Anaconda3\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
1405 self._feed_input_shapes,
1406 check_batch_axis=False,
-> 1407 exception_prefix='input')
1408 y = _standardize_input_data(y, self._feed_output_names,
1409 output_shapes,
~\Anaconda3\lib\site-packages\keras\engine\training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
139 ' to have ' + str(len(shapes[i])) +
140 ' dimensions, but got array with shape ' +
--> 141 str(array.shape))
142 for j, (dim, ref_dim) in enumerate(zip(array.shape, shapes[i])):
143 if not j and not check_batch_axis:
ValueError: Error when checking input: expected lstm_14_input to have 3 dimensions, but got array with shape (658118, 20000)
So I'm not able to understand where the problem is. How do I change the dimensions?
Calling .toarray() causes a memory error because it's a huge dataset.
According to the documentation TfidfVectorizer gives you a sparse matrix:
Returns:
X : sparse matrix, [n_samples, n_features]
Tf-idf-weighted document-term matrix.
LSTMs are used for sequences, so your data has to be in the form of sequences; what you have is a sparse document-term matrix. I think you should also take a look at word embeddings, because it makes no sense to feed the output of TfidfVectorizer into an LSTM. If you want to use TF-IDF, look at bag-of-words models instead.
Maybe also take a look at a blog post about sentiment analysis; there are plenty of them.
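For completeness, here is a minimal sketch of the sequence route, assuming X_train/X_test hold the raw review texts and y_train/y_test are the one-hot labels from the question; the vocabulary size, sequence length and embedding size are illustrative choices:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

max_words, max_len = 20000, 200   # illustrative vocabulary size and sequence length

# turn each review into a fixed-length sequence of word indices
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(X_train)
X_train_seq = pad_sequences(tokenizer.texts_to_sequences(X_train), maxlen=max_len)
X_test_seq = pad_sequences(tokenizer.texts_to_sequences(X_test), maxlen=max_len)

model = Sequential()
model.add(Embedding(max_words, 128, input_length=max_len))  # (batch, max_len) -> (batch, max_len, 128)
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))    # the LSTM now gets the 3D input it expects
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train_seq, y_train, batch_size=128, epochs=15,
          validation_data=(X_test_seq, y_test))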

TypeError in pdf function of powerlaw

This is my original Data:
Firms IndustrySsize
1 3598185 0-4
2 998953 5-9
3 608502 10-19
4 5205640 0-19
5 513179 20-99
6 87563 100-499
7 5806382 0-499
8 19076 500
I converted the column 'IndustrySsize' as shown below, just to see if that would prevent the error at the bottom of this question, TypeError: '<' not supported between instances of 'str' and 'int'. It did not.
Here is my code:
newDS=removeTotal[['Firms', 'IndustrySize']][:8].astype(float)
The above code gives the following table. I converted to float just to check whether that works any better than int or other dtypes, but none of them do.
Firms IndustrySize
1 3598185.0 1.0
2 998953.0 2.0
3 608502.0 3.0
4 5205640.0 4.0
5 513179.0 5.0
6 87563.0 6.0
7 5806382.0 7.0
8 19076.0 8.0
I could generate a normal plot with this data.
import matplotlib.pyplot as plt
plt.plot(newDS['Firms'],newDS['IndustrySize'] )
plt.show()
The plot is generated fine.
Now if I run
from powerlaw import plot_pdf, Fit, pdf
x, y = pdf(newDS, linear_bins=True)
it generates the following error (traceback provided below):
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-21-79cc0ba3a245> in <module>()
1 from powerlaw import plot_pdf, Fit, pdf
----> 2 x, y = pdf(newDS)
/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/powerlaw.py in pdf(data, xmin, xmax, linear_bins, **kwargs)
1949
1950
-> 1951 if xmin<1: #To compute the pdf also from the data below x=1, the data, xmax and xmin are rescaled dividing them by xmin.
1952 xmax2=xmax/xmin
1953 xmin2=1
TypeError: '<' not supported between instances of 'str' and 'int'
I have also asked this question here

TypeError using sns.distplot() on dataframe with one row

I'm plotting subsets of a dataframe, and one subset happens to have only one row. This is the only reason I can think of for why it's causing problems. This is what it looks like:
problem_dataframe = prob_df[prob_df['Date']==7]
problem_dataframe.head()
I try to do:
sns.distplot(problem_dataframe['floatTime'])
But I get the error:
TypeError: len() of unsized object
Would someone please tell me what's causing this and how to work around it?
The TypeError is resolved by setting bins=1.
But that uncovers a different error, ValueError: x must be 1D or 2D, which gets triggered by an internal function in Matplotlib's hist(), called _normalize_input():
import pandas as pd
import seaborn as sns
df = pd.DataFrame(['Tue','Feb',7,'15:37:58',2017,15.6196]).T
df.columns = ['Day','Month','Date','Time','Year','floatTime']
sns.distplot(df.floatTime, bins=1)
Output:
ValueError Traceback (most recent call last)
<ipython-input-25-858df405d200> in <module>()
6 df.columns = ['Day','Month','Date','Time','Year','floatTime']
7 df.floatTime.values.astype(float)
----> 8 sns.distplot(df.floatTime, bins=1)
/home/andrew/anaconda3/lib/python3.6/site-packages/seaborn/distributions.py in distplot(a, bins, hist, kde, rug, fit, hist_kws, kde_kws, rug_kws, fit_kws, color, vertical, norm_hist, axlabel, label, ax)
213 hist_color = hist_kws.pop("color", color)
214 ax.hist(a, bins, orientation=orientation,
--> 215 color=hist_color, **hist_kws)
216 if hist_color != color:
217 hist_kws["color"] = hist_color
/home/andrew/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py in inner(ax, *args, **kwargs)
1890 warnings.warn(msg % (label_namer, func.__name__),
1891 RuntimeWarning, stacklevel=2)
-> 1892 return func(ax, *args, **kwargs)
1893 pre_doc = inner.__doc__
1894 if pre_doc is None:
/home/andrew/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py in hist(self, x, bins, range, normed, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, **kwargs)
6141 x = np.array([[]])
6142 else:
-> 6143 x = _normalize_input(x, 'x')
6144 nx = len(x) # number of datasets
6145
/home/andrew/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py in _normalize_input(inp, ename)
6080 else:
6081 raise ValueError(
-> 6082 "{ename} must be 1D or 2D".format(ename=ename))
6083 if inp.shape[1] < inp.shape[0]:
6084 warnings.warn(
ValueError: x must be 1D or 2D
_normalize_input() was removed from Matplotlib (it looks like sometime last year), so I guess Seaborn is referring to an older version under the hood.
You can see _normalize_input() in this old commit:
def _normalize_input(inp, ename='input'):
"""Normalize 1 or 2d input into list of np.ndarray or
a single 2D np.ndarray.
Parameters
----------
inp : iterable
ename : str, optional
Name to use in ValueError if `inp` can not be normalized
"""
if (isinstance(x, np.ndarray) or
not iterable(cbook.safe_first_element(inp))):
# TODO: support masked arrays;
inp = np.asarray(inp)
if inp.ndim == 2:
# 2-D input with columns as datasets; switch to rows
inp = inp.T
elif inp.ndim == 1:
# new view, single row
inp = inp.reshape(1, inp.shape[0])
else:
raise ValueError(
"{ename} must be 1D or 2D".format(ename=ename))
...
I can't figure out why inp.ndim!=1, though. Performing the same np.asarray().ndim on the input returns 1 as expected:
np.asarray(df.floatTime).ndim # 1
So you're facing a few obstacles if you want to make a single-valued input work with sns.distplot().
Suggested Workaround
Check for a single-element df.floatTime, and if that's the case, just use plt.hist() instead (which is what distplot goes to anyway, along with KDE):
plt.hist(df.floatTime)
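A minimal sketch of that check, assuming prob_df and the 'floatTime' column from the question:

import matplotlib.pyplot as plt
import seaborn as sns

subset = prob_df[prob_df['Date'] == 7]['floatTime']

if len(subset) > 1:
    sns.distplot(subset)   # usual case: histogram plus KDE
else:
    plt.hist(subset)       # single value: a plain histogram avoids the error
plt.show()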

GridSearchCV: “TypeError: 'StratifiedKFold' object is not iterable”

I want to perform a GridSearchCV on a RandomForestClassifier, but my data is not balanced, so I use StratifiedKFold:
from sklearn.model_selection import StratifiedKFold
from sklearn.grid_search import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
param_grid = {'n_estimators':[10, 30, 100, 300], "max_depth": [3, None],
"max_features": [1, 5, 10], "min_samples_leaf": [1, 10, 25, 50], "criterion": ["gini", "entropy"]}
rfc = RandomForestClassifier()
clf = GridSearchCV(rfc, param_grid=param_grid, cv=StratifiedKFold()).fit(X_train, y_train)
But I get an error:
TypeError Traceback (most recent call last)
<ipython-input-597-b08e92c33165> in <module>()
9 rfc = RandomForestClassifier()
10
---> 11 clf = GridSearchCV(rfc, param_grid=param_grid, cv=StratifiedKFold()).fit(X_train, y_train)
c:\python34\lib\site-packages\sklearn\grid_search.py in fit(self, X, y)
811
812 """
--> 813 return self._fit(X, y, ParameterGrid(self.param_grid))
c:\python34\lib\site-packages\sklearn\grid_search.py in _fit(self, X, y, parameter_iterable)
559 self.fit_params, return_parameters=True,
560 error_score=self.error_score)
--> 561 for parameters in parameter_iterable
562 for train, test in cv)
c:\python34\lib\site-packages\sklearn\externals\joblib\parallel.py in __call__(self, iterable)
756 # was dispatched. In particular this covers the edge
757 # case of Parallel used with an exhausted iterator.
--> 758 while self.dispatch_one_batch(iterator):
759 self._iterating = True
760 else:
c:\python34\lib\site-packages\sklearn\externals\joblib\parallel.py in dispatch_one_batch(self, iterator)
601
602 with self._lock:
--> 603 tasks = BatchedCalls(itertools.islice(iterator, batch_size))
604 if len(tasks) == 0:
605 # No more tasks available in the iterator: tell caller to stop.
c:\python34\lib\site-packages\sklearn\externals\joblib\parallel.py in __init__(self, iterator_slice)
125
126 def __init__(self, iterator_slice):
--> 127 self.items = list(iterator_slice)
128 self._size = len(self.items)
c:\python34\lib\site-packages\sklearn\grid_search.py in <genexpr>(.0)
560 error_score=self.error_score)
561 for parameters in parameter_iterable
--> 562 for train, test in cv)
563
564 # Out is a list of triplet: score, estimator, n_test_samples
TypeError: 'StratifiedKFold' object is not iterable
When I write cv=StratifiedKFold(y_train) I get ValueError: The number of folds must be of Integral type. But when I write cv=5, it works.
I don't understand what is wrong with StratifiedKFold.
I had exactly the same problem. The solution that worked for me is to replace:
from sklearn.grid_search import GridSearchCV
with
from sklearn.model_selection import GridSearchCV
Then it should work fine.
It seems that cv=StratifiedKFold() should be changed to cv=StratifiedKFold().split(X_train, y_train).
The API changed in the latest version. You used to pass y when you created the StratifiedKFold object; now you pass just the number of folds and supply y later.
The problem here is an API change, as mentioned in the other answers; however, those answers could be more explicit.
The cv parameter documentation states:

cv : int, cross-validation generator or an iterable, optional
    Determines the cross-validation splitting strategy. Possible inputs for cv are:
    - None, to use the default 3-fold cross-validation,
    - integer, to specify the number of folds,
    - an object to be used as a cross-validation generator,
    - an iterable yielding train/test splits.
    For integer/None inputs, if y is binary or multiclass, StratifiedKFold used. If the estimator is a classifier or if y is neither binary nor multiclass, KFold is used.
So, whatever cross-validation strategy is used, all that is needed is to provide the generator using the split function, as suggested:
kfolds = StratifiedKFold(5)
clf = GridSearchCV(estimator, parameters, scoring=qwk, cv=kfolds.split(xtrain,ytrain))
clf.fit(xtrain, ytrain)
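Putting the pieces together, here is a minimal sketch of the corrected call for the question's setup, assuming X_train and y_train as defined there; note that the model_selection version of GridSearchCV also accepts a StratifiedKFold object directly as cv:

from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier

param_grid = {'n_estimators': [10, 30, 100, 300],
              'max_depth': [3, None],
              'max_features': [1, 5, 10],
              'min_samples_leaf': [1, 10, 25, 50],
              'criterion': ['gini', 'entropy']}

rfc = RandomForestClassifier()
skf = StratifiedKFold(n_splits=5)   # no y here; the stratified split happens inside fit
clf = GridSearchCV(rfc, param_grid=param_grid, cv=skf).fit(X_train, y_train)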
