| prompt (string, 105–4.73k chars) | reference_code (string, 11–774 chars) | code_context (string, 746–120k chars) | problem_id (int64, 0–999) | library_problem_id (int64, 0–290) | library (class label, 7 classes) | test_case_cnt (int64, 0–5) | perturbation_type (class label, 4 classes) | perturbation_origin_id (int64, 0–289) |
|---|---|---|---|---|---|---|---|---|
Problem:
I have a data frame with one (string) column and I'd like to split it into two (string) columns, with the column headers 'fips' and 'row'.
My dataframe df looks like this:
row
0 114 AAAAAA
1 514 ENENEN
2 1926 HAHAHA
3 0817 O-O,O-O
4 998244353 TTTTTT
I do not know how to use df.row.str[:] to achi... | def g(df):
    return pd.DataFrame(df.row.str.split(' ', n=1).tolist(), columns=['fips','row'])
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
        return pd.DataFrame(df.row.str.split(" ", n=1).tolist(), columns=["fips", "row"])
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.Dat... | 200 | 200 | 2Pandas | 2 | 3Surface | 199 |
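The split-into-two-columns recipe above can be sketched in isolation (sample data mirrors the problem; note that in pandas ≥ 2.0 the `n` argument of `str.split` is keyword-only):

```python
import pandas as pd

# hypothetical frame mirroring the problem's single-column layout
df = pd.DataFrame({"row": ["114 AAAAAA", "514 ENENEN", "1926 HAHAHA"]})

# split each string at the first space only (n=1) and rebuild the frame
# with the two pieces as separate string columns
out = pd.DataFrame(df["row"].str.split(" ", n=1).tolist(),
                   columns=["fips", "row"])
print(out)
```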
Problem:
I have a data frame with one (string) column and I'd like to split it into three (string) columns, with the column headers 'fips', 'medi' and 'row'.
My dataframe df looks like this:
row
0 00000 UNITED STATES
1 01000 ALAB AMA
2 01001 Autauga County, AL
3 01003 Baldwin County, AL
4 01005 Barbour County, AL
I... | def g(df):
    return pd.DataFrame(df.row.str.split(' ', n=2).tolist(), columns=['fips','medi','row'])
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return pd.DataFrame(
            df.row.str.split(" ", n=2).tolist(), columns=["fips", "medi", "row"]
)
def define_test_input(test_case_id):
if test_case_id... | 201 | 201 | 2Pandas | 2 | 2Semantic | 199 |
Problem:
I have a Dataframe as below.
Name 2001 2002 2003 2004 2005 2006
Name1 2 5 0 0 4 6
Name2 1 4 2 0 4 0
Name3 0 5 0 0 0 2
I wanted to calculate the cumulative average for each row using pandas, but while calculating the average it has to ignore if the v... | def g(df):
cols = list(df)[1:]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, col] != 0:
cnt = min(cnt+1, 2)
s = (s + df.loc[idx, col]) / cnt
df.loc[idx, col] = s
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
cols = list(df)[1:]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, col] != 0:
... | 202 | 202 | 2Pandas | 2 | 1Origin | 202 |
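A runnable sketch of the zero-ignoring running average used by the reference solution above: capping the divisor at 2 means each new non-zero value is averaged with the current running value, while zeros just carry it forward. The one-row frame is a hypothetical stand-in for the example data (columns are floats to avoid dtype upcasting on assignment):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Name1"], "2001": [2.0], "2002": [5.0],
                   "2003": [0.0], "2004": [0.0], "2005": [4.0], "2006": [6.0]})

cols = list(df)[1:]  # every column except Name
for idx in df.index:
    s = 0.0
    cnt = 0
    for col in cols:
        if df.loc[idx, col] != 0:
            cnt = min(cnt + 1, 2)           # divisor is 1 for the first value, 2 afterwards
            s = (s + df.loc[idx, col]) / cnt
        df.loc[idx, col] = s                # zeros carry the running value forward
print(df)
```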
Problem:
I have a Dataframe as below.
Name 2001 2002 2003 2004 2005 2006
Name1 2 5 0 0 4 6
Name2 1 4 2 0 4 0
Name3 0 5 0 0 0 2
I wanted to calculate the cumulative average for each row from end to head using pandas, but while calculating the average it has t... | def g(df):
cols = list(df)[1:]
cols = cols[::-1]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, col] != 0:
cnt = min(cnt+1, 2)
s = (s + df.loc[idx, col]) / cnt
df.loc[idx, col] = s
return df
df = g(df.co... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
cols = list(df)[1:]
cols = cols[::-1]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, co... | 203 | 203 | 2Pandas | 2 | 2Semantic | 202 |
Problem:
I have a Dataframe as below.
Name 2001 2002 2003 2004 2005 2006
Name1 2 5 0 0 4 6
Name2 1 4 2 0 4 0
Name3 0 5 0 0 0 2
I wanted to calculate the cumulative average for each row using pandas, But while calculating the Average It has to ignore if the v... | cols = list(df)[1:]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, col] != 0:
cnt = min(cnt+1, 2)
s = (s + df.loc[idx, col]) / cnt
df.loc[idx, col] = s
result = df
return result
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
cols = list(df)[1:]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, col] != 0:
... | 204 | 204 | 2Pandas | 2 | 3Surface | 202 |
Problem:
I have a Dataframe as below.
Name 2001 2002 2003 2004 2005 2006
Name1 2 5 0 0 4 6
Name2 1 4 2 0 4 0
Name3 0 5 0 0 0 2
I wanted to calculate the cumulative average for each row from end to head using pandas, but while calculating the average it has t... | def g(df):
cols = list(df)[1:]
cols = cols[::-1]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, col] != 0:
s += df.loc[idx, col]
cnt += 1
df.loc[idx, col] = s / (max(cnt, 1))
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
cols = list(df)[1:]
cols = cols[::-1]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, co... | 205 | 205 | 2Pandas | 2 | 0Difficult-Rewrite | 202 |
Problem:
Hi, I've read a lot of questions here on Stack Overflow about this problem, but I have a slightly different task.
I have this DF:
# DateTime Close
1 2000-01-04 1460
2 2000-01-05 1470
3 2000-01-06 1480
4 2000-01-07 1450
I want to get the difference between each row for Clos... | def g(df):
df['label'] = df.Close.diff().fillna(1).gt(0).astype(int)
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["label"] = df.Close.diff().fillna(1).gt(0).astype(int)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFra... | 206 | 206 | 2Pandas | 2 | 1Origin | 206 |
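The one-liner above can be exercised on its own; `diff()` yields NaN for the first row, which `fillna(1)` turns into a positive value so the first label is 1 (sample prices are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"Close": [1460, 1470, 1480, 1450]})

# 1 where Close rose relative to the previous row, else 0;
# the first row has no predecessor, so fillna(1) marks it as a rise
df["label"] = df["Close"].diff().fillna(1).gt(0).astype(int)
print(df["label"].tolist())
```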
Problem:
Hi, I've read a lot of questions here on Stack Overflow about this problem, but I have a slightly different task.
I have this DF:
# DateTime Close
1 2000-01-04 1460
2 2000-01-05 1470
3 2000-01-06 1480
4 2000-01-07 1480
5 2000-01-08 1450
I want to get the difference b... | def g(df):
label = [1,]
for i in range(1, len(df)):
if df.loc[i, 'Close'] > df.loc[i-1, 'Close']:
label.append(1)
elif df.loc[i, 'Close'] == df.loc[i-1, 'Close']:
label.append(0)
else:
label.append(-1)
df['label'] = label
return df
df = g(df.c... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
label = [
1,
]
for i in range(1, len(df)):
if df.loc[i, "Close"] > df.loc[i - 1, "Close"]:
label.append(1)
... | 207 | 207 | 2Pandas | 2 | 2Semantic | 206 |
Problem:
Hi, I've read a lot of questions here on Stack Overflow about this problem, but I have a slightly different task.
I have this DF:
# DateTime Close
1 2000-01-04 1460
2 2000-01-05 1470
3 2000-01-06 1480
4 2000-01-07 1480
5 2000-01-08 1450
I want to get the difference b... | def g(df):
label = []
for i in range(len(df)-1):
if df.loc[i, 'Close'] > df.loc[i+1, 'Close']:
label.append(1)
elif df.loc[i, 'Close'] == df.loc[i+1, 'Close']:
label.append(0)
else:
label.append(-1)
label.append(1)
df['label'] = label
df["D... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
label = []
for i in range(len(df) - 1):
if df.loc[i, "Close"] > df.loc[i + 1, "Close"]:
label.append(1)
elif df.loc[i, "Close"]... | 208 | 208 | 2Pandas | 2 | 0Difficult-Rewrite | 206 |
Problem:
I have the following data:
id=["Train A","Train A","Train A","Train B","Train B","Train B"]
arrival_time = ["0"," 2016-05-19 13:50:00","2016-05-19 21:25:00","0","2016-05-24 18:30:00","2016-05-26 12:15:00"]
departure_time = ["2016-05-19 08:25:00","2016-05-19 16:00:00","2016-05-20 07:45:00","2016-05-24 12:50... | import numpy as np
def g(df):
df['arrival_time'] = pd.to_datetime(df['arrival_time'].replace('0', np.nan))
df['departure_time'] = pd.to_datetime(df['departure_time'])
df['Duration'] = df['arrival_time'] - df.groupby('id')['departure_time'].shift()
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["arrival_time"] = pd.to_datetime(df["arrival_time"].replace("0", np.nan))
df["departure_time"] = pd.to_datetime(df["departure_time"])
df["Duration"] = df["a... | 209 | 209 | 2Pandas | 2 | 1Origin | 209 |
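A sketch of the duration computation above, on a hypothetical subset of the timetable: the sentinel string "0" is turned into NaN before parsing, and the per-train `shift()` aligns each arrival with the previous departure of the same train:

```python
import pandas as pd
import numpy as np

# hypothetical subset of the train timetable from the problem
df = pd.DataFrame({
    "id": ["Train A", "Train A", "Train B"],
    "arrival_time": ["0", "2016-05-19 13:50:00", "0"],
    "departure_time": ["2016-05-19 08:25:00", "2016-05-19 16:00:00",
                       "2016-05-24 12:50:00"],
})

# the sentinel "0" marks a missing arrival; replace it with NaN before parsing
df["arrival_time"] = pd.to_datetime(df["arrival_time"].replace("0", np.nan))
df["departure_time"] = pd.to_datetime(df["departure_time"])
# duration = current arrival minus the previous departure of the same train
df["Duration"] = df["arrival_time"] - df.groupby("id")["departure_time"].shift()
print(df)
```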
Problem:
I have the following data:
id=["Train A","Train A","Train A","Train B","Train B","Train B"]
arrival_time = ["0"," 2016-05-19 13:50:00","2016-05-19 21:25:00","0","2016-05-24 18:30:00","2016-05-26 12:15:00"]
departure_time = ["2016-05-19 08:25:00","2016-05-19 16:00:00","2016-05-20 07:45:00","2016-05-24 12:50... | import numpy as np
def g(df):
df['arrival_time'] = pd.to_datetime(df['arrival_time'].replace('0', np.nan))
df['departure_time'] = pd.to_datetime(df['departure_time'])
df['Duration'] = (df['arrival_time'] - df.groupby('id')['departure_time'].shift()).dt.total_seconds()
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["arrival_time"] = pd.to_datetime(df["arrival_time"].replace("0", np.nan))
df["departure_time"] = pd.to_datetime(df["departure_time"])
df["Duration"] = (
... | 210 | 210 | 2Pandas | 2 | 2Semantic | 209 |
Problem:
I have the following data:
id=["Train A","Train A","Train A","Train B","Train B","Train B"]
arrival_time = ["0"," 2016-05-19 13:50:00","2016-05-19 21:25:00","0","2016-05-24 18:30:00","2016-05-26 12:15:00"]
departure_time = ["2016-05-19 08:25:00","2016-05-19 16:00:00","2016-05-20 07:45:00","2016-05-24 12:50... | import numpy as np
def g(df):
df['arrival_time'] = pd.to_datetime(df['arrival_time'].replace('0', np.nan))
df['departure_time'] = pd.to_datetime(df['departure_time'])
df['Duration'] = (df['arrival_time'] - df.groupby('id')['departure_time'].shift()).dt.total_seconds()
df["arrival_time"] = df["arrival_ti... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["arrival_time"] = pd.to_datetime(df["arrival_time"].replace("0", np.nan))
df["departure_time"] = pd.to_datetime(df["departure_time"])
df["Duration"] = (
... | 211 | 211 | 2Pandas | 2 | 0Difficult-Rewrite | 209 |
Problem:
I have the following dataframe:
key1 key2
0 a one
1 a two
2 b one
3 b two
4 a one
5 c two
Now, I want to group the dataframe by key1 and count how many values in column key2 equal "one", to get this result:
key1 count
0 a 2
1 b 1
2 c 0
I just get the u... | def g(df):
return df.groupby('key1')['key2'].apply(lambda x: (x=='one').sum()).reset_index(name='count')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return (
df.groupby("key1")["key2"]
.apply(lambda x: (x == "one").sum())
.reset_index(name="count")
)
def define_test_input(te... | 212 | 212 | 2Pandas | 2 | 1Origin | 212 |
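A self-contained sketch of the grouped value-count above; summing the boolean mask inside each group keeps groups with no match (count 0), unlike filtering before the groupby:

```python
import pandas as pd

df = pd.DataFrame({"key1": ["a", "a", "b", "b", "a", "c"],
                   "key2": ["one", "two", "one", "two", "one", "two"]})

# per group, sum the boolean mask (x == "one"); groups with no match
# still appear, with a count of 0
result = (df.groupby("key1")["key2"]
            .apply(lambda x: (x == "one").sum())
            .reset_index(name="count"))
print(result)
```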
Problem:
I have the following dataframe:
key1 key2
0 a one
1 a two
2 b one
3 b two
4 a one
5 c two
Now, I want to group the dataframe by key1 and count how many values in column key2 equal "two", to get this result:
key1 count
0 a 1
1 b 1
2 c 1
I just get the u... | def g(df):
return df.groupby('key1')['key2'].apply(lambda x: (x=='two').sum()).reset_index(name='count')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return (
df.groupby("key1")["key2"]
.apply(lambda x: (x == "two").sum())
.reset_index(name="count")
)
def define_test_input(te... | 213 | 213 | 2Pandas | 1 | 2Semantic | 212 |
Problem:
I have the following dataframe:
key1 key2
0 a one
1 a two
2 b gee
3 b two
4 a three
5 c two
Now, I want to group the dataframe by key1 and count the values in column key2 that end with "e", to get this result:
key1 count
0 a 2
1 b 1
2 c 0
I ju... | def g(df):
return df.groupby('key1')['key2'].apply(lambda x: x.str.endswith('e').sum()).reset_index(name='count')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return (
df.groupby("key1")["key2"]
.apply(lambda x: x.str.endswith("e").sum())
.reset_index(name="count")
)
def define_test_i... | 214 | 214 | 2Pandas | 1 | 0Difficult-Rewrite | 212 |
Problem:
How do I get the min and max Dates from a dataframe's major axis?
value
Date
2014-03-13 10000.000
2014-03-21 2000.000
2014-03-27 2000.000
2014-03-17 200.000
2014-03-17 5.000
2014-03-17 70.000
2014-03-21 200.000
2014-03-27 5.0... | def g(df):
return df.index.max(), df.index.min()
max_result,min_result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.index.max(), df.index.min()
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{"value": [10000, ... | 215 | 215 | 2Pandas | 2 | 1Origin | 215 |
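The answer above relies on the index exposing `min()`/`max()` directly, with no sorting needed; a minimal sketch on hypothetical dates:

```python
import pandas as pd

df = pd.DataFrame({"value": [10000.0, 2000.0, 200.0]},
                  index=pd.to_datetime(["2014-03-13", "2014-03-21", "2014-03-17"]))

# DatetimeIndex supports max()/min() directly
max_result, min_result = df.index.max(), df.index.min()
print(max_result, min_result)
```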
Problem:
How do I get the mode and median Dates from a dataframe's major axis?
value
2014-03-13 10000.000
2014-03-21 2000.000
2014-03-27 2000.000
2014-03-17 200.000
2014-03-17 5.000
2014-03-17 70.000
2014-03-21 200.000
2014-03-27 5.000
2014-03-27 25.000
2014-03-27 0.02... | def g(df):
Date = list(df.index)
Date = sorted(Date)
half = len(list(Date)) // 2
return max(Date, key=lambda v: Date.count(v)), Date[half]
mode_result,median_result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
Date = list(df.index)
Date = sorted(Date)
half = len(list(Date)) // 2
return max(Date, key=lambda v: Date.count(v)), Date[half]
def define_test_in... | 216 | 216 | 2Pandas | 2 | 0Difficult-Rewrite | 215 |
Problem:
I am trying to modify a DataFrame df to only contain rows for which the values in the column closing_price are between 99 and 101 and trying to do this with the code below.
However, I get the error
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
a... | def g(df):
return df.query('99 <= closing_price <= 101')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.query("99 <= closing_price <= 101")
def define_test_input(test_case_id):
if test_case_id == 1:
np.random.seed(2)
... | 217 | 217 | 2Pandas | 1 | 1Origin | 217 |
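`DataFrame.query` accepts Python-style chained comparisons, which sidesteps the ambiguous-truth-value error raised by `99 <= df.closing_price <= 101`; a sketch with hypothetical prices:

```python
import pandas as pd

df = pd.DataFrame({"closing_price": [98.5, 99.5, 100.0, 101.5]})

# chained comparison inside the query string; equivalent to
# df[(df.closing_price >= 99) & (df.closing_price <= 101)]
result = df.query("99 <= closing_price <= 101")
print(result)
```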
Problem:
I am trying to modify a DataFrame df to only contain rows for which the values in the column closing_price are not between 99 and 101 and trying to do this with the code below.
However, I get the error
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()... | def g(df):
return df.query('closing_price < 99 or closing_price > 101')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.query("closing_price < 99 or closing_price > 101")
def define_test_input(test_case_id):
if test_case_id == 1:
np.random.... | 218 | 218 | 2Pandas | 1 | 2Semantic | 217 |
Problem:
I'm using groupby on a pandas dataframe to drop all rows that don't have the minimum of a specific column. Something like this:
df1 = df.groupby("item", as_index=False)["diff"].min()
However, if I have more than those two columns, the other columns (e.g. otherstuff in my example) get dropped. Can I keep tho... | def g(df):
return df.loc[df.groupby("item")["diff"].idxmin()]
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.loc[df.groupby("item")["diff"].idxmin()]
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
... | 219 | 219 | 2Pandas | 2 | 1Origin | 219 |
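A sketch of the `idxmin` trick above: the aggregation returns row labels rather than values, so indexing back with `.loc` preserves every other column (data is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"item": [1, 1, 2, 2],
                   "diff": [5, 2, 7, 3],
                   "otherstuff": ["a", "b", "c", "d"]})

# idxmin() gives the index label of each group's minimal "diff";
# .loc then selects those full rows, keeping "otherstuff" intact
result = df.loc[df.groupby("item")["diff"].idxmin()]
print(result)
```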
Problem:
I have the following kind of strings in my column seen below. I would like to parse out everything after the last _ of each string, and if there is no _ then leave the string as-is. (as my below try will just exclude strings with no _)
so far I have tried below, seen here: Python pandas: remove everything aft... | def g(df):
    df['SOURCE_NAME'] = df['SOURCE_NAME'].str.rsplit('_', n=1).str.get(0)
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
        df["SOURCE_NAME"] = df["SOURCE_NAME"].str.rsplit("_", n=1).str.get(0)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
strs ... | 220 | 220 | 2Pandas | 2 | 1Origin | 220 |
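The `rsplit` recipe above handles underscore-free strings for free, because splitting yields a one-element list whose `.get(0)` is the original string (sample names are hypothetical; `n` is keyword-only in pandas ≥ 2.0):

```python
import pandas as pd

df = pd.DataFrame({"SOURCE_NAME": ["Stackoverflow_1234",
                                  "Stack_Over_Flow_1234",
                                  "Stackoverflow"]})

# split at the LAST underscore only (n=1 from the right), keep the left part;
# strings without "_" produce a single-element list, so they pass through
df["SOURCE_NAME"] = df["SOURCE_NAME"].str.rsplit("_", n=1).str.get(0)
print(df["SOURCE_NAME"].tolist())
```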
Problem:
I have the following kind of strings in my column seen below. I would like to parse out everything before the last _ of each string, and if there is no _ then leave the string as-is. (as my below try will just exclude strings with no _)
so far I have tried below, seen here: Python pandas: remove everything be... | def g(df):
    df['SOURCE_NAME'] = df['SOURCE_NAME'].str.rsplit('_', n=1).str.get(-1)
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
        df["SOURCE_NAME"] = df["SOURCE_NAME"].str.rsplit("_", n=1).str.get(-1)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
strs... | 221 | 221 | 2Pandas | 2 | 2Semantic | 220 |
Problem:
I have the following kind of strings in my column seen below. I would like to parse out everything after the last _ of each string, and if there is no _ then leave the string as-is. (as my below try will just exclude strings with no _)
so far I have tried below, seen here: Python pandas: remove everything aft... | df['SOURCE_NAME'] = df['SOURCE_NAME'].str.rsplit('_', 1).str.get(0)
result = df
return result
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
        df["SOURCE_NAME"] = df["SOURCE_NAME"].str.rsplit("_", n=1).str.get(0)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
strs ... | 222 | 222 | 2Pandas | 2 | 3Surface | 220 |
Problem:
I have a column (let's call it Column X) containing around 16000 NaN values. The column has two possible values, 1 or 0 (so like a binary).
I want to fill the NaN values in column X, but I don't want to use a single value for ALL the NaN entries.
To be precise; I want to fill the first 50% (round down) of NaN... | def g(df):
idx = df['Column_x'].index[df['Column_x'].isnull()]
total_nan_len = len(idx)
first_nan = total_nan_len // 2
df.loc[idx[0:first_nan], 'Column_x'] = 0
df.loc[idx[first_nan:total_nan_len], 'Column_x'] = 1
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
idx = df["Column_x"].index[df["Column_x"].isnull()]
total_nan_len = len(idx)
first_nan = total_nan_len // 2
df.loc[idx[0:first_nan], "Column_x"] = 0
... | 223 | 223 | 2Pandas | 2 | 1Origin | 223 |
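A compact sketch of the positional fill above: collect the labels of the NaN rows, then assign by slicing that label array (values here are hypothetical):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"Column_x": [1.0, np.nan, np.nan, np.nan, np.nan, 0.0]})

# index labels of the NaN entries, in row order
idx = df["Column_x"].index[df["Column_x"].isnull()]
half = len(idx) // 2          # first 50%, rounded down
df.loc[idx[:half], "Column_x"] = 0
df.loc[idx[half:], "Column_x"] = 1
print(df["Column_x"].tolist())
```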
Problem:
I have a column (let's call it Column X) containing around 16000 NaN values. The column has two possible values, 1 or 0 (so like a binary).
I want to fill the NaN values in column X, but I don't want to use a single value for ALL the NaN entries.
To be precise; I want to fill the first 30% (round down) of NaN... | def g(df):
idx = df['Column_x'].index[df['Column_x'].isnull()]
total_nan_len = len(idx)
first_nan = (total_nan_len * 3) // 10
middle_nan = (total_nan_len * 3) // 10
df.loc[idx[0:first_nan], 'Column_x'] = 0
df.loc[idx[first_nan:first_nan + middle_nan], 'Column_x'] = 0.5
df.loc[idx[first_nan +... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
idx = df["Column_x"].index[df["Column_x"].isnull()]
total_nan_len = len(idx)
first_nan = (total_nan_len * 3) // 10
middle_nan = (total_nan_len * 3) // ... | 224 | 224 | 2Pandas | 2 | 2Semantic | 223 |
Problem:
I have a column (let's call it Column X) containing around 16000 NaN values. The column has two possible values, 1 or 0 (so like a binary).
I want to fill the NaN values in column X, but I don't want to use a single value for ALL the NaN entries.
To be precise; I want to fill NaN values with "0" or "1" so tha... | def g(df):
total_len = len(df)
zero_len = (df['Column_x'] == 0).sum()
idx = df['Column_x'].index[df['Column_x'].isnull()]
total_nan_len = len(idx)
first_nan = (total_len // 2) - zero_len
df.loc[idx[0:first_nan], 'Column_x'] = 0
df.loc[idx[first_nan:total_nan_len], 'Column_x'] = 1
return ... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
total_len = len(df)
zero_len = (df["Column_x"] == 0).sum()
idx = df["Column_x"].index[df["Column_x"].isnull()]
total_nan_len = len(idx)
first_n... | 225 | 225 | 2Pandas | 2 | 0Difficult-Rewrite | 223 |
Problem:
I need to create a dataframe containing tuples from a series of dataframe arrays. What I need is the following:
I have dataframes a and b:
a = pd.DataFrame(np.array([[1, 2],[3, 4]]), columns=['one', 'two'])
b = pd.DataFrame(np.array([[5, 6],[7, 8]]), columns=['one', 'two'])
a:
one two
0 1 2
1 3 ... | def g(a,b):
return pd.DataFrame(np.rec.fromarrays((a.values, b.values)).tolist(),columns=a.columns,index=a.index)
result = g(a.copy(),b.copy())
| import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
a, b = data
return pd.DataFrame(
np.rec.fromarrays((a.values, b.values)).tolist(),
columns=a.columns,
index=a... | 226 | 226 | 2Pandas | 2 | 1Origin | 226 |
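The record-array trick above zips the two frames element-wise into tuples; a sketch using the example frames:

```python
import pandas as pd
import numpy as np

a = pd.DataFrame(np.array([[1, 2], [3, 4]]), columns=["one", "two"])
b = pd.DataFrame(np.array([[5, 6], [7, 8]]), columns=["one", "two"])

# fromarrays pairs the two 2-D arrays cell-by-cell into records;
# tolist() turns each record into a plain (a_ij, b_ij) tuple
result = pd.DataFrame(np.rec.fromarrays((a.values, b.values)).tolist(),
                      columns=a.columns, index=a.index)
print(result)
```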
Problem:
I need to create a dataframe containing tuples from a series of dataframe arrays. What I need is the following:
I have dataframes a and b:
a = pd.DataFrame(np.array([[1, 2],[3, 4]]), columns=['one', 'two'])
b = pd.DataFrame(np.array([[5, 6],[7, 8]]), columns=['one', 'two'])
c = pd.DataFrame(np.array([[9, 10],... | def g(a,b,c):
return pd.DataFrame(np.rec.fromarrays((a.values, b.values, c.values)).tolist(),columns=a.columns,index=a.index)
result = g(a.copy(),b.copy(), c.copy())
| import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
a, b, c = data
return pd.DataFrame(
np.rec.fromarrays((a.values, b.values, c.values)).tolist(),
columns=a.columns,
... | 227 | 227 | 2Pandas | 2 | 2Semantic | 226 |
Problem:
I need to create a dataframe containing tuples from a series of dataframe arrays. What I need is the following:
I have dataframes a and b:
a = pd.DataFrame(np.array([[1, 2],[3, 4]]), columns=['one', 'two'])
b = pd.DataFrame(np.array([[5, 6],[7, 8],[9, 10]]), columns=['one', 'two'])
a:
one two
0 1 2
... | def g(a,b):
if len(a) < len(b):
        a = pd.concat([a, pd.DataFrame([[np.nan, np.nan]] * (len(b) - len(a)), columns=a.columns)], ignore_index=True)
    elif len(a) > len(b):
        b = pd.concat([b, pd.DataFrame([[np.nan, np.nan]] * (len(a) - len(b)), columns=a.columns)], ignore_index=True)
return pd.DataFr... | import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
a, b = data
if len(a) < len(b):
for i in range(len(a), len(b)):
a.loc[i] = [np.nan for _ in range(len(list(a)))]
... | 228 | 228 | 2Pandas | 2 | 0Difficult-Rewrite | 226 |
Problem:
I have a DataFrame that looks like this:
+----------+---------+-------+
| username | post_id | views |
+----------+---------+-------+
| john | 1 | 3 |
| john | 2 | 23 |
| john | 3 | 44 |
| john | 4 | 82 |
| jane | 7 | 5 |
| jane | 8 | 25 |
| jane | 9 | 46 |
| jane | 10 | 56 |
+----------+---------+-------+
a... | def g(df, bins):
groups = df.groupby(['username', pd.cut(df.views, bins)])
return groups.size().unstack()
result = g(df.copy(),bins.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, bins = data
groups = df.groupby(["username", pd.cut(df.views, bins)])
return groups.size().unstack()
def define_test_input(test_case_id):
if... | 229 | 229 | 2Pandas | 2 | 1Origin | 229 |
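A sketch of the bin-and-pivot step above; `observed=False` is passed explicitly so that empty bins still show up as columns (its default has been changing across pandas versions):

```python
import pandas as pd

df = pd.DataFrame({"username": ["john", "john", "jane", "jane"],
                   "views": [3, 23, 5, 56]})
bins = [1, 10, 25, 50, 100]

# group by user AND the bin each view count falls into, then pivot
# the bins out into columns of per-user counts
groups = df.groupby(["username", pd.cut(df["views"], bins)], observed=False)
result = groups.size().unstack()
print(result)
```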
Problem:
I have a DataFrame and I would like to transform it to count views that belong to certain bins.
example:
+----------+---------+-------+
| username | post_id | views |
+----------+---------+-------+
| john | 1 | 3 |
| john | 2 | 23 |
| john | 3 | 44 |
| john | 4 | 82 |
| jane | 7 | 5 |
| jane | 8 | 25 |
| j... | def g(df, bins):
groups = df.groupby(['username', pd.cut(df.views, bins)])
return groups.size().unstack()
result = g(df.copy(),bins.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, bins = data
groups = df.groupby(["username", pd.cut(df.views, bins)])
return groups.size().unstack()
def define_test_input(test_case_id):
if... | 230 | 230 | 2Pandas | 2 | 3Surface | 229 |
Problem:
I have a DataFrame that looks like this:
+----------+---------+-------+
| username | post_id | views |
+----------+---------+-------+
| tom | 10 | 3 |
| tom | 9 | 23 |
| tom | 8 | 44 |
| tom | 7 | 82 |
| jack | 6 | 5 |
| jack | 5 | 25 |
| jack | 4 | 46 |
| jack | 3 | 56 |
+----------+---------+-------+
and I... | def g(df, bins):
groups = df.groupby(['username', pd.cut(df.views, bins)])
return groups.size().unstack()
result = g(df.copy(),bins.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, bins = data
groups = df.groupby(["username", pd.cut(df.views, bins)])
return groups.size().unstack()
def define_test_input(test_case_id):
if... | 231 | 231 | 2Pandas | 2 | 3Surface | 229 |
Problem:
I have the following dataframe:
text
1 "abc"
2 "def"
3 "ghi"
4 "jkl"
How can I merge these rows into a dataframe with a single row like the following one?
text
1 "abc, def, ghi, jkl"
A:
<code>
import pandas as pd
df = pd.DataFrame({'text': ['abc', 'def', 'ghi', 'jkl']})
</code>
result = ... # put... | def g(df):
return pd.DataFrame({'text': [', '.join(df['text'].str.strip('"').tolist())]})
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return pd.DataFrame({"text": [", ".join(df["text"].str.strip('"').tolist())]})
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.Data... | 232 | 232 | 2Pandas | 2 | 1Origin | 232 |
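The row-merging answer above boils down to a plain string join wrapped back into a one-row frame (the `str.strip('"')` in the reference only matters if the stored values carry literal quote characters); a sketch:

```python
import pandas as pd

df = pd.DataFrame({"text": ["abc", "def", "ghi", "jkl"]})

# join all values with ", " and wrap the single string back into a 1-row frame
result = pd.DataFrame({"text": [", ".join(df["text"].tolist())]})
print(result.loc[0, "text"])
```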
Problem:
I have the following dataframe:
text
1 "abc"
2 "def"
3 "ghi"
4 "jkl"
How can I merge these rows into a dataframe with a single row like the following one?
text
1 "abc-def-ghi-jkl"
A:
<code>
import pandas as pd
df = pd.DataFrame({'text': ['abc', 'def', 'ghi', 'jkl']})
</code>
result = ... # put sol... | def g(df):
return pd.DataFrame({'text': ['-'.join(df['text'].str.strip('"').tolist())]})
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return pd.DataFrame({"text": ["-".join(df["text"].str.strip('"').tolist())]})
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataF... | 233 | 233 | 2Pandas | 2 | 2Semantic | 232 |
Problem:
I have the following dataframe:
text
1 "abc"
2 "def"
3 "ghi"
4 "jkl"
How can I merge these rows into a dataframe with a single row like the following one?
text
1 "jkl, ghi, def, abc"
A:
<code>
import pandas as pd
df = pd.DataFrame({'text': ['abc', 'def', 'ghi', 'jkl']})
</code>
result = ... # put ... | def g(df):
return pd.DataFrame({'text': [', '.join(df['text'].str.strip('"').tolist()[::-1])]})
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return pd.DataFrame(
{"text": [", ".join(df["text"].str.strip('"').tolist()[::-1])]}
)
def define_test_input(test_case_id):
if test_case_id ==... | 234 | 234 | 2Pandas | 2 | 2Semantic | 232 |
Problem:
I have the following dataframe:
text
1 "abc"
2 "def"
3 "ghi"
4 "jkl"
How can I merge these rows into a dataframe with a single row like the following one Series?
0 abc, def, ghi, jkl
Name: text, dtype: object
A:
<code>
import pandas as pd
df = pd.DataFrame({'text': ['abc', 'def', 'ghi', 'jkl']})
<... | def g(df):
return pd.Series(', '.join(df['text'].to_list()), name='text')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return pd.Series(", ".join(df["text"].to_list()), name="text")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame({"text": [... | 235 | 235 | 2Pandas | 2 | 2Semantic | 232 |
Problem:
I have the following dataframe:
text
1 "abc"
2 "def"
3 "ghi"
4 "jkl"
How can I merge these rows into a dataframe with a single row like the following one Series?
0 jkl-ghi-def-abc
Name: text, dtype: object
A:
<code>
import pandas as pd
df = pd.DataFrame({'text': ['abc', 'def', 'ghi', 'jkl']})
</co... | def g(df):
return pd.Series('-'.join(df['text'].to_list()[::-1]), name='text')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return pd.Series("-".join(df["text"].to_list()[::-1]), name="text")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame({"tex... | 236 | 236 | 2Pandas | 2 | 0Difficult-Rewrite | 232 |
Problem:
I have dfs as follows:
df1:
id city district date value
0 1 bj ft 2019/1/1 1
1 2 bj ft 2019/1/1 5
2 3 sh hp 2019/1/1 9
3 4 sh hp 2019/1/1 13
4 5 sh hp 2019/1/1 17
df2
id date value
0 3 2019/2/1 1
1 4 20... | def g(df1, df2):
return pd.concat([df1,df2.merge(df1[['id','city','district']], how='left', on='id')],sort=False).reset_index(drop=True)
result = g(df1.copy(),df2.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df1, df2 = data
return pd.concat(
[df1, df2.merge(df1[["id", "city", "district"]], how="left", on="id")],
sort=False,
).reset_index(d... | 237 | 237 | 2Pandas | 2 | 1Origin | 237 |
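A sketch of the enrich-then-stack pattern above: the merge copies `city`/`district` from `df1` onto `df2` by `id`, and `concat` appends the enriched rows (data is a hypothetical subset of the example):

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2, 3],
                    "city": ["bj", "bj", "sh"],
                    "district": ["ft", "ft", "hp"],
                    "date": ["2019/1/1", "2019/1/1", "2019/1/1"],
                    "value": [1, 5, 9]})
df2 = pd.DataFrame({"id": [3], "date": ["2019/2/1"], "value": [1]})

# look up city/district for df2's ids, then stack both frames
result = pd.concat(
    [df1, df2.merge(df1[["id", "city", "district"]], how="left", on="id")],
    sort=False,
).reset_index(drop=True)
print(result)
```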
Problem:
I have dfs as follows:
df1:
id city district date value
0 1 bj ft 2019/1/1 1
1 2 bj ft 2019/1/1 5
2 3 sh hp 2019/1/1 9
3 4 sh hp 2019/1/1 13
4 5 sh hp 2019/1/1 17
df2
id date value
0 3 2019/2/1 1
1 4 20... | def g(df1, df2):
df = pd.concat([df1,df2.merge(df1[['id','city','district']], how='left', on='id')],sort=False).reset_index(drop=True)
df['date'] = pd.to_datetime(df['date'])
df['date'] = df['date'].dt.strftime('%d-%b-%Y')
return df.sort_values(by=['id','date']).reset_index(drop=True)
result = g(df1.co... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df1, df2 = data
df = pd.concat(
[df1, df2.merge(df1[["id", "city", "district"]], how="left", on="id")],
sort=False,
).reset_index(dro... | 238 | 238 | 2Pandas | 2 | 0Difficult-Rewrite | 237 |
Problem:
I have dfs as follows:
df1:
id city district date value
0 1 bj ft 2019/1/1 1
1 2 bj ft 2019/1/1 5
2 3 sh hp 2019/1/1 9
3 4 sh hp 2019/1/1 13
4 5 sh hp 2019/1/1 17
df2
id date value
0 3 2019/2/1 1
1 4 20... | def g(df1, df2):
df = pd.concat([df1,df2.merge(df1[['id','city','district']], how='left', on='id')],sort=False).reset_index(drop=True)
return df.sort_values(by=['id','date']).reset_index(drop=True)
result = g(df1.copy(),df2.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df1, df2 = data
df = pd.concat(
[df1, df2.merge(df1[["id", "city", "district"]], how="left", on="id")],
sort=False,
).reset_index(dro... | 239 | 239 | 2Pandas | 2 | 0Difficult-Rewrite | 237 |
Problem:
I have two DataFrames C and D as follows:
C
A B
0 AB 1
1 CD 2
2 EF 3
D
A B
1 CD 4
2 GH 5
I have to merge both the dataframes but the merge should overwrite the values in the right df. Rest of the rows from the dataframe should not change.
Output
A B
0 AB 1
1 CD 4
2 EF 3
3 GH ... | def g(C, D):
return pd.concat([C,D]).drop_duplicates('A', keep='last').sort_values(by=['A']).reset_index(drop=True)
result = g(C.copy(),D.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
C, D = data
return (
pd.concat([C, D])
.drop_duplicates("A", keep="last")
.sort_values(by=["A"])
.reset_index(drop=Tr... | 240 | 240 | 2Pandas | 2 | 1Origin | 240 |
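As a runnable sketch of the concat + drop_duplicates(keep='last') idea, using the C and D values from the problem:

```python
import pandas as pd

C = pd.DataFrame({'A': ['AB', 'CD', 'EF'], 'B': [1, 2, 3]})
D = pd.DataFrame({'A': ['CD', 'GH'], 'B': [4, 5]})

# Stack C then D; keeping the *last* duplicate of each key means D's
# rows overwrite C's wherever 'A' collides.
result = (pd.concat([C, D])
          .drop_duplicates('A', keep='last')
          .sort_values(by=['A'])
          .reset_index(drop=True))
print(result)
```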
Problem:
I have two DataFrames C and D as follows:
C
A B
0 AB 1
1 CD 2
2 EF 3
D
A B
1 CD 4
2 GH 5
I have to merge both the dataframes but the merge should keep the values in the left df. Rest of the rows from the dataframe should not change.
Output
A B
0 AB 1
1 CD 2
2 EF 3
3 GH 5
Th... | def g(C, D):
return pd.concat([C,D]).drop_duplicates('A', keep='first').sort_values(by=['A']).reset_index(drop=True)
result = g(C.copy(),D.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
C, D = data
return (
pd.concat([C, D])
.drop_duplicates("A", keep="first")
.sort_values(by=["A"])
.reset_index(drop=T... | 241 | 241 | 2Pandas | 2 | 2Semantic | 240 |
Problem:
I have two DataFrames C and D as follows:
C
A B
0 AB 1
1 CD 2
2 EF 3
D
A B
1 CD 4
2 GH 5
I have to merge both the dataframes but the merge should overwrite the values in the right df. Rest of the rows from the dataframe should not change. I want to add a new column 'dulplicated'. If dataf... | def g(C, D):
df = pd.concat([C,D]).drop_duplicates('A', keep='last').sort_values(by=['A']).reset_index(drop=True)
for i in range(len(C)):
if df.loc[i, 'A'] in D.A.values:
df.loc[i, 'dulplicated'] = True
else:
df.loc[i, 'dulplicated'] = False
for i in range(len(C), len... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
C, D = data
df = (
pd.concat([C, D])
.drop_duplicates("A", keep="last")
.sort_values(by=["A"])
.reset_index(drop=True... | 242 | 242 | 2Pandas | 2 | 0Difficult-Rewrite | 240 |
Problem:
I would like to aggregate user transactions into lists in pandas. I can't figure out how to make a list comprised of more than one field. For example,
df = pd.DataFrame({'user':[1,1,2,2,3],
'time':[20,10,11,18, 15],
'amount':[10.99, 4.99, 2.99, 1.99, 10.99]})
which loo... | def g(df):
return df.groupby('user')[['time', 'amount']].apply(lambda x: x.values.tolist())
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.groupby("user")[["time", "amount"]].apply(lambda x: x.values.tolist())
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.Da... | 243 | 243 | 2Pandas | 2 | 1Origin | 243 |
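A self-contained run of the groupby-into-lists solution, reusing the sample frame from the problem:

```python
import pandas as pd

df = pd.DataFrame({'user': [1, 1, 2, 2, 3],
                   'time': [20, 10, 11, 18, 15],
                   'amount': [10.99, 4.99, 2.99, 1.99, 10.99]})

# For each user, collect the (time, amount) pairs as a list of lists.
# Note .values upcasts everything to float because 'amount' is float.
result = df.groupby('user')[['time', 'amount']].apply(lambda x: x.values.tolist())
print(result)
```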
Problem:
I would like to aggregate user transactions into lists in pandas. I can't figure out how to make a list comprised of more than one field. For example,
df = pd.DataFrame({'user':[1,1,2,2,3],
'time':[20,10,11,18, 15],
'amount':[10.99, 4.99, 2.99, 1.99, 10.99]})
which loo... | def g(df):
return df.groupby('user')[['time', 'amount']].apply(lambda x: x.values.tolist()).to_frame(name='amount-time-tuple')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return (
df.groupby("user")[["time", "amount"]]
.apply(lambda x: x.values.tolist())
.to_frame(name="amount-time-tuple")
)
def ... | 244 | 244 | 2Pandas | 2 | 2Semantic | 243 |
Problem:
I would like to aggregate user transactions into lists in pandas. I can't figure out how to make a list comprised of more than one field. For example,
df = pd.DataFrame({'user':[1,1,2,2,3],
'time':[20,10,11,18, 15],
'amount':[10.99, 4.99, 2.99, 1.99, 10.99]})
which loo... | def g(df):
return df.groupby('user')[['time', 'amount']].apply(lambda x: x.values.tolist()[::-1]).to_frame(name='amount-time-tuple')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return (
df.groupby("user")[["time", "amount"]]
.apply(lambda x: x.values.tolist()[::-1])
.to_frame(name="amount-time-tuple")
)
... | 245 | 245 | 2Pandas | 1 | 0Difficult-Rewrite | 243 |
Problem:
I have a pandas series which values are numpy array. For simplicity, say
series = pd.Series([np.array([1,2,3,4]), np.array([5,6,7,8]), np.array([9,10,11,12])], index=['file1', 'file2', 'file3'])
file1 [1, 2, 3, 4]
file2 [5, 6, 7, 8]
file3 [9, 10, 11, 12]
How can I expand it to a da... | def g(s):
return pd.DataFrame.from_records(s.values,index=s.index)
df = g(series.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
s = data
return pd.DataFrame.from_records(s.values, index=s.index)
def define_test_input(test_case_id):
if test_case_id == 1:
series = pd.Series(
... | 246 | 246 | 2Pandas | 2 | 1Origin | 246 |
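The from_records expansion can be checked quickly on the sample series from the problem:

```python
import numpy as np
import pandas as pd

series = pd.Series([np.array([1, 2, 3, 4]),
                    np.array([5, 6, 7, 8]),
                    np.array([9, 10, 11, 12])],
                   index=['file1', 'file2', 'file3'])

# Each array becomes one row; the Series index carries over as the row index.
df = pd.DataFrame.from_records(series.values, index=series.index)
print(df)
```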
Problem:
I have a pandas series which values are numpy array. For simplicity, say
series = pd.Series([np.array([1,2,3,4]), np.array([5,6,7,8]), np.array([9,10,11,12])], index=['file1', 'file2', 'file3'])
file1 [1, 2, 3, 4]
file2 [5, 6, 7, 8]
file3 [9, 10, 11, 12]
How can I expand it to a da... | def g(s):
return pd.DataFrame.from_records(s.values,index=s.index).reset_index().rename(columns={'index': 'name'})
df = g(series.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
s = data
return (
pd.DataFrame.from_records(s.values, index=s.index)
.reset_index()
.rename(columns={"index": "name"})
)
def define_test... | 247 | 247 | 2Pandas | 2 | 2Semantic | 246 |
Problem:
I have a dataframe with column names, and I want to find the one that contains a certain string, but does not exactly match it. I'm searching for 'spike' in column names like 'spike-2', 'hey spike', 'spiked-in' (the 'spike' part is always continuous).
I want the column name to be returned as a string or a var... | def g(df, s):
spike_cols = [col for col in df.columns if s in col and col != s]
return spike_cols
result = g(df.copy(),s)
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, s = data
spike_cols = [col for col in df.columns if s in col and col != s]
return spike_cols
def define_test_input(test_case_id):
if test_ca... | 248 | 248 | 2Pandas | 1 | 1Origin | 248 |
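A small sketch of the substring-but-not-exact-match column search; the single row of data is hypothetical, since only the column names matter here:

```python
import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4]],
                  columns=['spike-2', 'hey spike', 'spiked-in', 'no'])
s = 'spike'

# Keep names that contain the substring but are not an exact match.
spike_cols = [col for col in df.columns if s in col and col != s]
print(spike_cols)
```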
Problem:
I have a dataframe with column names, and I want to find the one that contains a certain string, but does not exactly match it. I'm searching for 'spike' in column names like 'spike-2', 'hey spike', 'spiked-in' (the 'spike' part is always continuous).
I want the column name to be returned as a string or a var... | def g(df, s):
spike_cols = [col for col in df.columns if s in col and col != s]
return df[spike_cols]
result = g(df.copy(),s)
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, s = data
spike_cols = [col for col in df.columns if s in col and col != s]
return df[spike_cols]
def define_test_input(test_case_id):
if tes... | 249 | 249 | 2Pandas | 1 | 2Semantic | 248 |
Problem:
I have a dataframe with column names, and I want to find the one that contains a certain string, but does not exactly match it. I'm searching for 'spike' in column names like 'spike-2', 'hey spike', 'spiked-in' (the 'spike' part is always continuous).
I want the column name to be returned as a string or a var... | def g(df, s):
spike_cols = [s for col in df.columns if s in col and s != col]
for i in range(len(spike_cols)):
spike_cols[i] = spike_cols[i]+str(i+1)
result = df[[col for col in df.columns if s in col and col != s]]
result.columns = spike_cols
return result
result = g(df.copy(),s)
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, s = data
spike_cols = [s for col in df.columns if s in col and s != col]
for i in range(len(spike_cols)):
spike_cols[i] = spike_cols[i] + str... | 250 | 250 | 2Pandas | 1 | 0Difficult-Rewrite | 248 |
Problem:
I have a Pandas dataframe that looks like the below:
codes
1 [71020]
2 [77085]
3 [36415]
4 [99213, 99287]
5 [99233, 99233, 99233]
I'm trying to split the lists in df['codes'] into columns, like the below:
... | def g(df):
return df.codes.apply(pd.Series).add_prefix('code_')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.codes.apply(pd.Series).add_prefix("code_")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
... | 251 | 251 | 2Pandas | 2 | 1Origin | 251 |
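The apply(pd.Series) expansion is easy to verify on a trimmed, hypothetical subset of the codes column:

```python
import pandas as pd

df = pd.DataFrame({'codes': [[71020], [77085], [99213, 99287],
                             [99233, 99233, 99233]]})

# apply(pd.Series) turns each list into a row of columns, padding
# shorter lists with NaN; add_prefix names them code_0, code_1, ...
result = df.codes.apply(pd.Series).add_prefix('code_')
print(result)
```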
Problem:
I have a Pandas dataframe that looks like the below:
codes
1 [71020]
2 [77085]
3 [36415]
4 [99213, 99287]
5 [99233, 99233, 99233]
I'm trying to split the lists in df['codes'] into columns, like the below:
... | def g(df):
df = df.codes.apply(pd.Series)
cols = list(df)
for i in range(len(cols)):
cols[i]+=1
df.columns = cols
return df.add_prefix('code_')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df = df.codes.apply(pd.Series)
cols = list(df)
for i in range(len(cols)):
cols[i] += 1
df.columns = cols
return df.add_prefix("code... | 252 | 252 | 2Pandas | 2 | 2Semantic | 251 |
Problem:
I have a Pandas dataframe that looks like the below:
codes
1 [71020]
2 [77085]
3 [36415]
4 [99213, 99287]
5 [99234, 99233, 99233]
I'm trying to sort and split the lists in df['codes'] into columns, like th... | def g(df):
for i in df.index:
df.loc[i, 'codes'] = sorted(df.loc[i, 'codes'])
df = df.codes.apply(pd.Series)
cols = list(df)
for i in range(len(cols)):
cols[i]+=1
df.columns = cols
return df.add_prefix('code_')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
for i in df.index:
df.loc[i, "codes"] = sorted(df.loc[i, "codes"])
df = df.codes.apply(pd.Series)
cols = list(df)
for i in range(len(cols))... | 253 | 253 | 2Pandas | 2 | 0Difficult-Rewrite | 251 |
Problem:
I have a dataframe with one of its columns having a list at each index. I want to concatenate these lists into one list. I am using
ids = df.loc[0:index, 'User IDs'].values.tolist()
However, this results in
['[1,2,3,4......]'] which is a string. Somehow each value in my list column is type str. I have tried... | def g(df):
return df.col1.sum()
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.col1.sum()
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(dict(col1=[[1, 2, 3]] * 2))
return df
t... | 254 | 254 | 2Pandas | 1 | 1Origin | 254 |
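The trick here is that Series.sum() folds a column of lists with +, which for Python lists is concatenation. A minimal sketch with made-up lists:

```python
import pandas as pd

df = pd.DataFrame({'col1': [[1, 2, 3], [4, 5]]})  # hypothetical lists

# Series.sum() reduces the object column with +, concatenating the lists.
flat = df.col1.sum()
print(flat)
```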
Problem:
I have a dataframe with one of its columns having a list at each index. I want to reverse each list and concatenate these lists into one string like '3,2,1,5,4'. I am using
ids = str(reverse(df.loc[0:index, 'User IDs'].values.tolist()))
However, this results in
'[[1,2,3,4......]]' which is not what I want. Somehow...
for i in df.index:
df.loc[i, 'col1'] = df.loc[i, 'col1'][::-1]
L = df.col1.sum()
L = map(lambda x:str(x), L)
return ','.join(L)
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
for i in df.index:
df.loc[i, "col1"] = df.loc[i, "col1"][::-1]
L = df.col1.sum()
L = map(lambda x: str(x), L)
return ",".join(L)
def d... | 255 | 255 | 2Pandas | 1 | 0Difficult-Rewrite | 254 |
Problem:
I have a dataframe with one of its columns having a list at each index. I want to concatenate these lists into one string like '1,2,3,4,5'. I am using
ids = str(df.loc[0:index, 'User IDs'].values.tolist())
However, this results in
'[[1,2,3,4......]]' which is not what I want. Somehow each value in my list column...
L = df.col1.sum()
L = map(lambda x:str(x), L)
return ','.join(L)
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
L = df.col1.sum()
L = map(lambda x: str(x), L)
return ",".join(L)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.D... | 256 | 256 | 2Pandas | 1 | 0Difficult-Rewrite | 254 |
Problem:
I have a time series in the form of a DataFrame that I can groupby to a series
pan.groupby(pan.Time).mean()
which has just two columns Time and Value:
Time Value
2015-04-24 06:38:49 0.023844
2015-04-24 06:39:19 0.019075
2015-04-24 06:43:49 0.023844
2015-04-24 06:44:18 0.019075
2015-04-24 06:... | def g(df):
df.set_index('Time', inplace=True)
df_group = df.groupby(pd.Grouper(level='Time', freq='2T'))['Value'].agg('mean')
df_group.dropna(inplace=True)
df_group = df_group.to_frame().reset_index()
return df_group
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df.set_index("Time", inplace=True)
df_group = df.groupby(pd.Grouper(level="Time", freq="2T"))["Value"].agg("mean")
df_group.dropna(inplace=True)
df_gro... | 257 | 257 | 2Pandas | 2 | 1Origin | 257 |
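A runnable sketch of the Grouper-based bucketing, using the timestamps from the problem; '2min' is the modern spelling of the reference's '2T' alias (the 'T' alias is deprecated since pandas 2.2):

```python
import pandas as pd

df = pd.DataFrame({
    'Time': pd.to_datetime(['2015-04-24 06:38:49', '2015-04-24 06:39:19',
                            '2015-04-24 06:43:49', '2015-04-24 06:44:18']),
    'Value': [0.023844, 0.019075, 0.023844, 0.019075],
}).set_index('Time')

# Bin the index into 2-minute buckets, average each bucket, and drop
# the buckets that contained no samples.
grouped = (df.groupby(pd.Grouper(level='Time', freq='2min'))['Value']
             .agg('mean')
             .dropna()
             .to_frame()
             .reset_index())
print(grouped)
```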
Problem:
I have a time series in the form of a DataFrame that I can groupby to a series
pan.groupby(pan.Time).mean()
which has just two columns Time and Value:
Time Value
2015-04-24 06:38:49 0.023844
2015-04-24 06:39:19 0.019075
2015-04-24 06:43:49 0.023844
2015-04-24 06:44:18 0.019075
2015-04-24 06:... | def g(df):
df.set_index('Time', inplace=True)
df_group = df.groupby(pd.Grouper(level='Time', freq='3T'))['Value'].agg('sum')
df_group.dropna(inplace=True)
df_group = df_group.to_frame().reset_index()
return df_group
df = g(df.copy()) | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df.set_index("Time", inplace=True)
df_group = df.groupby(pd.Grouper(level="Time", freq="3T"))["Value"].agg("sum")
df_group.dropna(inplace=True)
df_grou... | 258 | 258 | 2Pandas | 2 | 2Semantic | 257 |
Problem:
I have an issue with ranking datetimes. Let's say I have the following table.
ID TIME
01 2018-07-11 11:12:20
01 2018-07-12 12:00:23
01 2018-07-13 12:00:00
02 2019-09-11 11:00:00
02 2019-09-12 12:00:00
and I want to add another column to rank the table by time for each id and group. I used
df... | def g(df):
df['TIME'] = pd.to_datetime(df['TIME'])
df['RANK'] = df.groupby('ID')['TIME'].rank(ascending=True)
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["TIME"] = pd.to_datetime(df["TIME"])
df["RANK"] = df.groupby("ID")["TIME"].rank(ascending=True)
return df
def define_test_input(test_case_id):
... | 259 | 259 | 2Pandas | 1 | 1Origin | 259 |
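The groupby rank can be sanity-checked on the table from the problem:

```python
import pandas as pd

df = pd.DataFrame({'ID': ['01', '01', '01', '02', '02'],
                   'TIME': ['2018-07-11 11:12:20', '2018-07-12 12:00:23',
                            '2018-07-13 12:00:00', '2019-09-11 11:00:00',
                            '2019-09-12 12:00:00']})

# rank() needs a comparable dtype, so convert the strings to datetimes
# first, then rank within each ID group (1 = earliest).
df['TIME'] = pd.to_datetime(df['TIME'])
df['RANK'] = df.groupby('ID')['TIME'].rank(ascending=True)
print(df)
```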
Problem:
I have an issue with ranking datetimes. Let's say I have the following table.
ID TIME
01 2018-07-11 11:12:20
01 2018-07-12 12:00:23
01 2018-07-13 12:00:00
02 2019-09-11 11:00:00
02 2019-09-12 12:00:00
and I want to add another column to rank the table by time for each id and group. I used
df... | def g(df):
df['TIME'] = pd.to_datetime(df['TIME'])
df['RANK'] = df.groupby('ID')['TIME'].rank(ascending=False)
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["TIME"] = pd.to_datetime(df["TIME"])
df["RANK"] = df.groupby("ID")["TIME"].rank(ascending=False)
return df
def define_test_input(test_case_id):
... | 260 | 260 | 2Pandas | 1 | 2Semantic | 259 |
Problem:
I have an issue with ranking datetimes. Let's say I have the following table.
ID TIME
01 2018-07-11 11:12:20
01 2018-07-12 12:00:23
01 2018-07-13 12:00:00
02 2019-09-11 11:00:00
02 2019-09-12 12:00:00
and I want to add another column to rank the table by time for each id and group. I used
df... | def g(df):
df['TIME'] = pd.to_datetime(df['TIME'])
df['TIME'] = df['TIME'].dt.strftime('%d-%b-%Y %a %T')
df['RANK'] = df.groupby('ID')['TIME'].rank(ascending=False)
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["TIME"] = pd.to_datetime(df["TIME"])
df["TIME"] = df["TIME"].dt.strftime("%d-%b-%Y %a %T")
df["RANK"] = df.groupby("ID")["TIME"].rank(ascending=False)
... | 261 | 261 | 2Pandas | 1 | 0Difficult-Rewrite | 259 |
Problem:
There are many questions here with similar titles, but I couldn't find one that's addressing this issue.
I have dataframes from many different origins, and I want to filter one by the other. Using boolean indexing works great when the boolean series is the same size as the filtered dataframe, but not when th... | def g(df, filt):
return df[filt[df.index.get_level_values('a')].values]
result = g(df.copy(), filt.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, filt = data
return df[filt[df.index.get_level_values("a")].values]
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.Da... | 262 | 262 | 2Pandas | 2 | 1Origin | 262 |
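A compact sketch of broadcasting a per-level boolean filter onto a MultiIndexed frame; the data below is hypothetical:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2, 2, 3, 3],
                   'b': [1, 2, 1, 2, 1, 2],
                   'c': [0, 1, 2, 3, 4, 5]}).set_index(['a', 'b'])
filt = pd.Series({1: True, 2: False, 3: True})  # keyed by level 'a'

# Look up the filter value for every row's level-'a' label, giving a
# boolean mask the same length as df.
result = df[filt[df.index.get_level_values('a')].values]
print(result)
```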
Problem:
There are many questions here with similar titles, but I couldn't find one that's addressing this issue.
I have dataframes from many different origins, and I want to filter one by the other. Using boolean indexing works great when the boolean series is the same size as the filtered dataframe, but not when th... | def g(df, filt):
df = df[filt[df.index.get_level_values('a')].values]
return df[filt[df.index.get_level_values('b')].values]
result = g(df.copy(), filt.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, filt = data
df = df[filt[df.index.get_level_values("a")].values]
return df[filt[df.index.get_level_values("b")].values]
def define_test_input(test_c... | 263 | 263 | 2Pandas | 2 | 2Semantic | 262 |
Problem:
While nan == nan is always False, in many cases people want to treat them as equal, and this is enshrined in pandas.DataFrame.equals:
NaNs in the same location are considered equal.
Of course, I can write
def equalp(x, y):
return (x == y) or (math.isnan(x) and math.isnan(y))
However, this will fail o... | def g(df):
return df.columns[df.iloc[0,:].fillna('Nan') != df.iloc[8,:].fillna('Nan')]
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.columns[df.iloc[0, :].fillna("Nan") != df.iloc[8, :].fillna("Nan")]
def define_test_input(test_case_id):
if test_case_id == 1:
np.random.see... | 264 | 264 | 2Pandas | 1 | 1Origin | 264 |
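The fillna-sentinel comparison can be sketched on a two-row miniature (using .loc labels 0 and 8 instead of positional iloc, purely for brevity):

```python
import numpy as np
import pandas as pd

# Two-row miniature: only the rows labelled 0 and 8, as in the problem.
df = pd.DataFrame({'c0': [1.0, 1.0], 'c1': [np.nan, np.nan], 'c2': [1.0, 2.0]},
                  index=[0, 8])

# Replace NaN with a sentinel so NaN "equals" NaN, then keep the
# columns where the two rows differ.
result = df.columns[df.loc[0].fillna('Nan') != df.loc[8].fillna('Nan')]
print(result)
```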
Problem:
While nan == nan is always False, in many cases people want to treat them as equal, and this is enshrined in pandas.DataFrame.equals:
NaNs in the same location are considered equal.
Of course, I can write
def equalp(x, y):
return (x == y) or (math.isnan(x) and math.isnan(y))
However, this will fail o... | def g(df):
return df.columns[df.iloc[0,:].fillna('Nan') == df.iloc[8,:].fillna('Nan')]
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.columns[df.iloc[0, :].fillna("Nan") == df.iloc[8, :].fillna("Nan")]
def define_test_input(test_case_id):
if test_case_id == 1:
np.random.see... | 265 | 265 | 2Pandas | 1 | 2Semantic | 264 |
Problem:
While nan == nan is always False, in many cases people want to treat them as equal, and this is enshrined in pandas.DataFrame.equals:
NaNs in the same location are considered equal.
Of course, I can write
def equalp(x, y):
return (x == y) or (math.isnan(x) and math.isnan(y))
However, this will fail o... | def g(df):
return (df.columns[df.iloc[0,:].fillna('Nan') != df.iloc[8,:].fillna('Nan')]).values.tolist()
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return (
df.columns[df.iloc[0, :].fillna("Nan") != df.iloc[8, :].fillna("Nan")]
).values.tolist()
def define_test_input(test_case_id):
if test... | 266 | 266 | 2Pandas | 1 | 2Semantic | 264 |
Problem:
While nan == nan is always False, in many cases people want to treat them as equal, and this is enshrined in pandas.DataFrame.equals:
NaNs in the same location are considered equal.
Of course, I can write
def equalp(x, y):
return (x == y) or (math.isnan(x) and math.isnan(y))
However, this will fail o... | def g(df):
cols = (df.columns[df.iloc[0,:].fillna('Nan') != df.iloc[8,:].fillna('Nan')]).values
result = []
for col in cols:
result.append((df.loc[0, col], df.loc[8, col]))
return result
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
cols = (
df.columns[df.iloc[0, :].fillna("Nan") != df.iloc[8, :].fillna("Nan")]
).values
result = []
for col in cols:
result.ap... | 267 | 267 | 2Pandas | 1 | 0Difficult-Rewrite | 264 |
Problem:
I'm attempting to convert a dataframe into a series using code which, simplified, looks like this:
dates = ['2016-1-{}'.format(i)for i in range(1,21)]
values = [i for i in range(20)]
data = {'Date': dates, 'Value': values}
df = pd.DataFrame(data)
df['Date'] = pd.to_datetime(df['Date'])
ts = pd.Series(df['Valu... | def g(df):
return pd.Series(df['Value'].values, index=df['Date'])
ts = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return pd.Series(df["Value"].values, index=df["Date"])
def define_test_input(test_case_id):
if test_case_id == 1:
dates = ["2016-1-{}".format(i) for i... | 268 | 268 | 2Pandas | 1 | 1Origin | 268 |
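A short check that building the Series from .values gives a numeric result rather than per-row objects:

```python
import pandas as pd

dates = ['2016-1-{}'.format(i) for i in range(1, 4)]
df = pd.DataFrame({'Date': pd.to_datetime(dates), 'Value': [0, 1, 2]})

# Passing .values (a plain ndarray) avoids index alignment, so the
# result is a numeric Series indexed by the dates.
ts = pd.Series(df['Value'].values, index=df['Date'])
print(ts)
```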
Problem:
I've seen similar questions but mine is more direct and abstract.
I have a dataframe with "n" rows, "n" being a small number. We can assume the index is just the row number. I would like to convert it to just one row.
So for example if I have
A,B,C,D,E
---------
1,2,3,4,5
6,7,8,9,10
11,12,13,14,5
I want as a... | def g(df):
df.index += 1
df_out = df.stack()
df.index -= 1
df_out.index = df_out.index.map('{0[1]}_{0[0]}'.format)
return df_out.to_frame().T
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df.index += 1
df_out = df.stack()
df.index -= 1
df_out.index = df_out.index.map("{0[1]}_{0[0]}".format)
return df_out.to_fr... | 269 | 269 | 2Pandas | 2 | 1Origin | 269 |
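The stack-and-rename flattening, sketched on a hypothetical 2×2 frame:

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 'B'])

# stack() gives a Series keyed by (row, column); rename each entry to
# "<column>_<row>" and transpose into a single-row frame.
df_out = df.stack()
df_out.index = df_out.index.map('{0[1]}_{0[0]}'.format)
result = df_out.to_frame().T
print(result)
```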
Problem:
I've seen similar questions but mine is more direct and abstract.
I have a dataframe with "n" rows, "n" being a small number. We can assume the index is just the row number. I would like to convert it to just one row.
So for example if I have
A,B,C,D,E
---------
1,2,3,4,5
6,7,8,9,10
11,12,13,14,5
I want as a... | def g(df):
df_out = df.stack()
df_out.index = df_out.index.map('{0[1]}_{0[0]}'.format)
return df_out.to_frame().T
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df_out = df.stack()
df_out.index = df_out.index.map("{0[1]}_{0[0]}".format)
return df_out.to_frame().T
def define_test_input(test_case... | 270 | 270 | 2Pandas | 2 | 2Semantic | 269 |
Problem:
pandas version: 1.2
I have a dataframe whose columns are 'float64' with null values represented as pd.NA. Is there a way to round without converting to string and then to decimal:
df = pd.DataFrame([(.21, .3212), (.01, .61237), (.66123, .03), (.21, .18),(pd.NA, .18)],
columns=['dogs', 'cats'])
df
... | def g(df):
df['dogs'] = df['dogs'].apply(lambda x: round(x,2) if str(x) != '<NA>' else x)
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["dogs"] = df["dogs"].apply(lambda x: round(x, 2) if str(x) != "<NA>" else x)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
... | 271 | 271 | 2Pandas | 2 | 1Origin | 271 |
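The guard on str(x) != '<NA>' works because str(pd.NA) is exactly '<NA>'. A minimal run on the problem's dogs column:

```python
import pandas as pd

df = pd.DataFrame({'dogs': [0.21, 0.01, 0.66123, pd.NA],
                   'cats': [0.3212, 0.61237, 0.03, 0.18]})

# str(pd.NA) is '<NA>', so the guard lets missing values pass through
# unchanged while everything else is rounded to 2 places.
df['dogs'] = df['dogs'].apply(lambda x: round(x, 2) if str(x) != '<NA>' else x)
print(df)
```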
Problem:
pandas version: 1.2
I have a dataframe whose columns are 'float64' with null values represented as pd.NA. Is there a way to round without converting to string and then to decimal:
df = pd.DataFrame([(.21, .3212), (.01, .61237), (.66123, pd.NA), (.21, .18),(pd.NA, .18)],
columns=['dogs', 'cats'])
df
... | def g(df):
for i in df.index:
if str(df.loc[i, 'dogs']) != '<NA>' and str(df.loc[i, 'cats']) != '<NA>':
df.loc[i, 'dogs'] = round(df.loc[i, 'dogs'], 2)
df.loc[i, 'cats'] = round(df.loc[i, 'cats'], 2)
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
for i in df.index:
if str(df.loc[i, "dogs"]) != "<NA>" and str(df.loc[i, "cats"]) != "<NA>":
df.loc[i, "dogs"] = round(df.loc[i, "dogs"], 2)
... | 272 | 272 | 2Pandas | 2 | 0Difficult-Rewrite | 271 |
Problem:
I do know some posts are quite similar to my question, but none of them succeeded in giving me the correct answer. I want, for each row of a pandas dataframe, to perform the sum of values taken from several columns. As the number of columns tends to vary, I want this sum to be performed from a list of columns.
A... | def g(df, list_of_my_columns):
df['Sum'] = df[list_of_my_columns].sum(axis=1)
return df
df = g(df.copy(),list_of_my_columns.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, list_of_my_columns = data
df["Sum"] = df[list_of_my_columns].sum(axis=1)
return df
def define_test_input(test_case_id):
if test_case_id == 1... | 273 | 273 | 2Pandas | 1 | 1Origin | 273 |
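The row-wise sum over a column list is a one-liner; the frame and column selection below are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({'Col A': [1, 2], 'Col E': [3, 4], 'Col Z': [5, 6]})
list_of_my_columns = ['Col A', 'Col Z']  # hypothetical selection

# Slice the listed columns and sum across each row (axis=1).
df['Sum'] = df[list_of_my_columns].sum(axis=1)
print(df)
```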
Problem:
I do know some posts are quite similar to my question, but none of them succeeded in giving me the correct answer. I want, for each row of a pandas dataframe, to perform the average of values taken from several columns. As the number of columns tends to vary, I want this average to be performed from a list of co...
df['Avg'] = df[list_of_my_columns].mean(axis=1)
return df
df = g(df.copy(),list_of_my_columns.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, list_of_my_columns = data
df["Avg"] = df[list_of_my_columns].mean(axis=1)
return df
def define_test_input(test_case_id):
if test_case_id == ... | 274 | 274 | 2Pandas | 1 | 2Semantic | 273 |
Problem:
I do know some posts are quite similar to my question, but none of them succeeded in giving me the correct answer. I want, for each row of a pandas dataframe, to perform the average of values taken from several columns. As the number of columns tends to vary, I want this average to be performed from a list of co...
df['Avg'] = df[list_of_my_columns].mean(axis=1)
df['Min'] = df[list_of_my_columns].min(axis=1)
df['Max'] = df[list_of_my_columns].max(axis=1)
df['Median'] = df[list_of_my_columns].median(axis=1)
return df
df = g(df.copy(),list_of_my_columns.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, list_of_my_columns = data
df["Avg"] = df[list_of_my_columns].mean(axis=1)
df["Min"] = df[list_of_my_columns].min(axis=1)
df["Max"] = df[list_of_m... | 275 | 275 | 2Pandas | 1 | 0Difficult-Rewrite | 273 |
Problem:
I have a MultiIndexed pandas DataFrame that needs sorting by one of the indexers. Here is a snippet of the data:
gene VIM
treatment dose time
TGFb 0.1 2 -0.158406
1 2 0.039158
10 2 -0.052608
0.1 24 0.157153
... | def g(df):
return df.sort_index(level='time')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.sort_index(level="time")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
... | 276 | 276 | 2Pandas | 1 | 1Origin | 276 |
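sort_index(level='time') can be sketched on a three-row slice of the MultiIndex shown above:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('TGFb', 0.1, 2), ('TGFb', 1.0, 2), ('TGFb', 0.1, 24)],
    names=['treatment', 'dose', 'time'])
df = pd.DataFrame({'VIM': [-0.158406, 0.039158, 0.157153]}, index=idx)

# Sort the rows by the 'time' level of the MultiIndex.
result = df.sort_index(level='time')
print(result)
```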
Problem:
I have a MultiIndexed pandas DataFrame that needs sorting by one of the indexers. Here is a snippet of the data:
gene VIM
treatment dose time
TGFb 0.1 2 -0.158406
1 2 0.039158
10 2 -0.052608
0.1 24 0.157153
... | def g(df):
return df.sort_values('VIM')
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.sort_values("VIM")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"VIM"... | 277 | 277 | 2Pandas | 1 | 2Semantic | 276 |
Problem:
I have a date column with data from 1 year in a pandas dataframe with a 1 minute granularity:
sp.head()
Open High Low Last Volume # of Trades OHLC Avg HLC Avg HL Avg Delta HiLodiff OCdiff div_Bar_Delta
Date
2019-06-13 15:30:00 2898.75 ... | def g(df):
to_delete = ['2020-02-17', '2020-02-18']
return df[~(df.index.strftime('%Y-%m-%d').isin(to_delete))]
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
to_delete = ["2020-02-17", "2020-02-18"]
return df[~(df.index.strftime("%Y-%m-%d").isin(to_delete))]
def define_test_input(test_case_id):
if test_case_id ... | 278 | 278 | 2Pandas | 1 | 1Origin | 278 |
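Dropping whole calendar days from a minute-level index, sketched with hypothetical one-column data:

```python
import pandas as pd

idx = pd.to_datetime(['2020-02-16 15:30:00', '2020-02-17 09:00:00',
                      '2020-02-18 09:01:00', '2020-02-19 09:02:00'])
df = pd.DataFrame({'Open': [1.0, 2.0, 3.0, 4.0]}, index=idx)

# Render each timestamp as a plain date string, then mask out the
# two unwanted days.
to_delete = ['2020-02-17', '2020-02-18']
result = df[~(df.index.strftime('%Y-%m-%d').isin(to_delete))]
print(result)
```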
Problem:
I have a date column with data from 1 year in a pandas dataframe with a 1 minute granularity:
sp.head()
Open High Low Last Volume # of Trades OHLC Avg HLC Avg HL Avg Delta HiLodiff OCdiff div_Bar_Delta
Date
2019-06-13 15:30:00 2898.75 ... | def g(df):
to_delete = ['2020-02-17', '2020-02-18']
df = df[~(df.index.strftime('%Y-%m-%d').isin(to_delete))]
df.index = df.index.strftime('%d-%b-%Y %A')
return df
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
to_delete = ["2020-02-17", "2020-02-18"]
df = df[~(df.index.strftime("%Y-%m-%d").isin(to_delete))]
df.index = df.index.strftime("%d-%b-%Y %A")
return d... | 279 | 279 | 2Pandas | 1 | 0Difficult-Rewrite | 278 |
Problem:
I have a square correlation matrix in pandas, and am trying to divine the most efficient way to return all values where the value (always a float -1 <= x <= 1) is above 0.3.
The pandas.DataFrame.filter method asks for a list of columns or a RegEx, but I always want to pass all columns in. Is there a best pra... | def g(corr):
corr_triu = corr.where(~np.tril(np.ones(corr.shape)).astype(bool))
corr_triu = corr_triu.stack()
corr_triu.name = 'Pearson Correlation Coefficient'
corr_triu.index.names = ['Col1', 'Col2']
return corr_triu[corr_triu > 0.3].to_frame()
result = g(corr.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
corr = data
corr_triu = corr.where(~np.tril(np.ones(corr.shape)).astype(bool))
corr_triu = corr_triu.stack()
corr_triu.name = "Pearson Correlation Coefficient"
c... | 280 | 280 | 2Pandas | 1 | 1Origin | 280 |
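A runnable miniature of the upper-triangle mask + stack approach, with a made-up 3×3 correlation matrix:

```python
import numpy as np
import pandas as pd

corr = pd.DataFrame([[1.0, 0.5, 0.1],
                     [0.5, 1.0, 0.8],
                     [0.1, 0.8, 1.0]],
                    columns=list('abc'), index=list('abc'))

# Blank out the lower triangle and diagonal so each pair appears once,
# flatten to (Col1, Col2) pairs, then keep the strong correlations.
corr_triu = corr.where(~np.tril(np.ones(corr.shape)).astype(bool))
stacked = corr_triu.stack()
result = stacked[stacked > 0.3]
print(result)
```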
Problem:
I have a square correlation matrix in pandas, and am trying to divine the most efficient way to return all values where the value (always a float -1 <= x <= 1) is above 0.3.
The pandas.DataFrame.filter method asks for a list of columns or a RegEx, but I always want to pass all columns in. Is there a best pra... | def g(corr):
corr_triu = corr.where(~np.tril(np.ones(corr.shape)).astype(bool))
corr_triu = corr_triu.stack()
return corr_triu[corr_triu > 0.3]
result = g(corr.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
corr = data
corr_triu = corr.where(~np.tril(np.ones(corr.shape)).astype(bool))
corr_triu = corr_triu.stack()
return corr_triu[corr_triu > 0.3]
def define_test_input... | 281 | 281 | 2Pandas | 1 | 2Semantic | 280 |
Problem:
I need to rename only the last column in my dataframe. The issue is that there are many columns with the same name (there is a reason for this), so I cannot use the code in other examples online. Is there a way to use something specific that just isolates the final column?
I have tried to do something like this
d... | def g(df):
return df.set_axis([*df.columns[:-1], 'Test'], axis=1, inplace=False)
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.set_axis([*df.columns[:-1], "Test"], axis=1, inplace=False)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame([[1... | 282 | 282 | 2Pandas | 1 | 1Origin | 282 |
Problem:
I need to rename only the first column in my dataframe, the issue is there are many columns with the same name (there is a reason for this), thus I cannot use the code in other examples online. Is there a way to use something specific that just isolates the first column?
I have tried to do something like this
... | def g(df):
return df.set_axis(['Test', *df.columns[1:]], axis=1, inplace=False)
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.set_axis(["Test", *df.columns[1:]], axis=1, inplace=False)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame([[1,... | 283 | 283 | 2Pandas | 1 | 2Semantic | 282 |
Problem:
I have a dataset with binary values. I want to find out frequent value in each row. This dataset have couple of millions records. What would be the most efficient way to do it? Following is the sample of the dataset.
import pandas as pd
data = pd.read_csv('myData.csv', sep = ',')
data.head()
bit1 bit2 bi... | def g(df):
df['frequent'] = df.mode(axis=1)
for i in df.index:
df.loc[i, 'freq_count'] = (df.iloc[i]==df.loc[i, 'frequent']).sum() - 1
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["frequent"] = df.mode(axis=1)
for i in df.index:
df.loc[i, "freq_count"] = (df.iloc[i] == df.loc[i, "frequent"]).sum() - 1
return df
def de... | 284 | 284 | 2Pandas | 1 | 1Origin | 284 |
Problem:
I have a dataset with integer values. I want to find out frequent value in each row. This dataset have couple of millions records. What would be the most efficient way to do it? Following is the sample of the dataset.
import pandas as pd
data = pd.read_csv('myData.csv', sep = ',')
data.head()
bit1 bit2 b... | def g(df):
df['frequent'] = df.mode(axis=1)
for i in df.index:
df.loc[i, 'freq_count'] = (df.iloc[i]==df.loc[i, 'frequent']).sum() - 1
return df
df = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["frequent"] = df.mode(axis=1)
for i in df.index:
df.loc[i, "freq_count"] = (df.iloc[i] == df.loc[i, "frequent"]).sum() - 1
return df
def de... | 285 | 285 | 2Pandas | 1 | 2Semantic | 284 |
Problem:
I have a dataset with integer values. I want to find out frequent value in each row. If there's multiple frequent value, present them as a list. This dataset have couple of millions records. What would be the most efficient way to do it? Following is the sample of the dataset.
import pandas as pd
data = pd.rea... | def g(df):
cols = list(df)
Mode = df.mode(axis=1)
df['frequent'] = df['bit1'].astype(object)
for i in df.index:
df.at[i, 'frequent'] = []
for i in df.index:
for col in list(Mode):
if pd.isna(Mode.loc[i, col])==False:
df.at[i, 'frequent'].append(Mode.loc[i,... | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
cols = list(df)
Mode = df.mode(axis=1)
df["frequent"] = df["bit1"].astype(object)
for i in df.index:
df.at[i, "frequent"] = []
for ... | 286 | 286 | 2Pandas | 1 | 0Difficult-Rewrite | 284 |
Problem:
Hi there.
I have a pandas DataFrame (df) like this:
foo id1 bar id2
0 8.0 1 NULL 1
1 5.0 1 NULL 1
2 3.0 1 NULL 1
3 4.0 1 1 2
4 7.0 1 3 2
5 9.0 1 4 3
6 5.0 1 2 3
7 7.0 1 3 1
...
I want to group by id1 and id2 and try to g... | def g(df):
df['bar'] = pd.to_numeric(df['bar'], errors='coerce')
res = df.groupby(["id1", "id2"])[["foo", "bar"]].mean()
return res
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["bar"] = pd.to_numeric(df["bar"], errors="coerce")
res = df.groupby(["id1", "id2"])[["foo", "bar"]].mean()
return res
def define_test_input(test_case_i... | 287 | 287 | 2Pandas | 2 | 1Origin | 287 |
Problem:
Hi there.
I have a pandas DataFrame (df) like this:
foo id1 bar id2
0 8.0 1 NULL 1
1 5.0 1 NULL 1
2 3.0 1 NULL 1
3 4.0 1 1 2
4 7.0 1 3 2
5 9.0 1 4 3
6 5.0 1 2 3
7 7.0 1 3 1
...
I want to group by id1 and id2 and try to g... | def g(df):
df['bar'] = df['bar'].replace("NULL", 0)
res = df.groupby(["id1", "id2"])[["foo", "bar"]].mean()
return res
result = g(df.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["bar"] = df["bar"].replace("NULL", 0)
res = df.groupby(["id1", "id2"])[["foo", "bar"]].mean()
return res
def define_test_input(test_case_id):
i... | 288 | 288 | 2Pandas | 2 | 0Difficult-Rewrite | 287 |
Problem:
Context
I'm trying to merge two big CSV files together.
Problem
Let's say I've one Pandas DataFrame like the following...
EntityNum foo ...
------------------------
1001.01 100
1002.02 50
1003.03 200
And another one like this...
EntityNum a_col b_col
-------------------------------... | def g(df_a, df_b):
return df_a[['EntityNum', 'foo']].merge(df_b[['EntityNum', 'a_col']], on='EntityNum', how='left')
result = g(df_a.copy(), df_b.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df_a, df_b = data
return df_a[["EntityNum", "foo"]].merge(
df_b[["EntityNum", "a_col"]], on="EntityNum", how="left"
)
def define_test_input(... | 289 | 289 | 2Pandas | 2 | 1Origin | 289 |
Problem:
Context
I'm trying to merge two big CSV files together.
Problem
Let's say I've one Pandas DataFrame like the following...
EntityNum foo ...
------------------------
1001.01 100
1002.02 50
1003.03 200
And another one like this...
EntityNum a_col b_col
-------------------------------... | def g(df_a, df_b):
return df_a[['EntityNum', 'foo']].merge(df_b[['EntityNum', 'b_col']], on='EntityNum', how='left')
result = g(df_a.copy(), df_b.copy())
| import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df_a, df_b = data
return df_a[["EntityNum", "foo"]].merge(
df_b[["EntityNum", "b_col"]], on="EntityNum", how="left"
)
def define_test_input(... | 290 | 290 | 2Pandas | 2 | 2Semantic | 289 |
Problem:
How do I get the dimensions of an array? For instance, this is (2, 2):
a = np.array([[1,2],[3,4]])
A:
<code>
import numpy as np
a = np.array([[1,2],[3,4]])
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| result = a.shape
| import numpy as np
import pandas as pd
import copy
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
a = np.array([[1, 2], [3, 4]])
elif test_case_id == 2:
np.random.seed(42)
dim1, dim2 = np.random.randint(1, 100, (... | 291 | 0 | 1Numpy | 4 | 1Origin | 0 |
Problem:
I want to figure out how to remove nan values from my array.
For example, My array looks something like this:
x = [1400, 1500, 1600, nan, nan, nan ,1700] #Not in this exact configuration
How can I remove the nan values from x to get sth like:
x = [1400, 1500, 1600, 1700]
A:
<code>
import numpy as np
x = np.ar... | x = x[~np.isnan(x)]
| import numpy as np
import copy
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
x = np.array([1400, 1500, 1600, np.nan, np.nan, np.nan, 1700])
elif test_case_id == 2:
np.random.seed(42)
x = np.random.rand(20)
... | 292 | 1 | 1Numpy | 2 | 1Origin | 1 |
Problem:
I want to figure out how to replace nan values from my array with np.inf.
For example, My array looks something like this:
x = [1400, 1500, 1600, nan, nan, nan ,1700] #Not in this exact configuration
How can I replace the nan values from x?
A:
<code>
import numpy as np
x = np.array([1400, 1500, 1600, np.nan, ... | x[np.isnan(x)] = np.inf
| import numpy as np
import copy
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
x = np.array([1400, 1500, 1600, np.nan, np.nan, np.nan, 1700])
elif test_case_id == 2:
np.random.seed(42)
x = np.random.rand(20)
... | 293 | 2 | 1Numpy | 2 | 2Semantic | 1 |
Problem:
I want to figure out how to remove nan values from my array.
For example, My array looks something like this:
x = [[1400, 1500, 1600, nan], [1800, nan, nan ,1700]] #Not in this exact configuration
How can I remove the nan values from x?
Note that after removing nan, the result cannot be np.array due to dimens... | result = [x[i, row] for i, row in enumerate(~np.isnan(x))]
| import numpy as np
import copy
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
x = np.array([[1400, 1500, 1600, np.nan], [1800, np.nan, np.nan, 1700]])
elif test_case_id == 2:
x = np.array([[1, 2, np.nan], [3, np.nan, np.nan]... | 294 | 3 | 1Numpy | 3 | 0Difficult-Rewrite | 1 |
Problem:
Let's say I have a 1d numpy positive integer array like this:
a = array([1,0,3])
I would like to encode this as a 2D one-hot array(for natural number)
b = array([[0,1,0,0], [1,0,0,0], [0,0,0,1]])
The leftmost element corresponds to 0 in `a`(NO MATTER whether 0 appears in `a` or not.), and the rightmost vice ve... | b = np.zeros((a.size, a.max()+1))
b[np.arange(a.size), a]=1
| import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
a = np.array([1, 0, 3])
elif test_case_id == 2:
np.random.seed(42)
a = np.random.randint(0, 20, 50)
return... | 295 | 4 | 1Numpy | 2 | 1Origin | 4 |
Problem:
Let's say I have a 1d numpy positive integer array like this
a = array([1,2,3])
I would like to encode this as a 2D one-hot array(for natural number)
b = array([[0,1,0,0], [0,0,1,0], [0,0,0,1]])
The leftmost element corresponds to 0 in `a`(NO MATTER whether 0 appears in `a` or not.), and the rightmost correspo... | b = np.zeros((a.size, a.max()+1))
b[np.arange(a.size), a]=1
| import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
a = np.array([1, 0, 3])
elif test_case_id == 2:
np.random.seed(42)
a = np.random.randint(0, 20, 50)
return... | 296 | 5 | 1Numpy | 2 | 3Surface | 4 |
Problem:
Let's say I have a 1d numpy integer array like this
a = array([-1,0,3])
I would like to encode this as a 2D one-hot array(for integers)
b = array([[1,0,0,0,0], [0,1,0,0,0], [0,0,0,0,1]])
The leftmost element always corresponds to the smallest element in `a`, and the rightmost vice versa.
Is there a quick way t... | temp = a - a.min()
b = np.zeros((a.size, temp.max()+1))
b[np.arange(a.size), temp]=1
| import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
a = np.array([-1, 0, 3])
elif test_case_id == 2:
np.random.seed(42)
a = np.random.randint(-5, 20, 50)
retu... | 297 | 6 | 1Numpy | 2 | 2Semantic | 4 |
Problem:
Let's say I have a 1d numpy array like this
a = np.array([1.5,-0.4,1.3])
I would like to encode this as a 2D one-hot array(only for elements appear in `a`)
b = array([[0,0,1], [1,0,0], [0,1,0]])
The leftmost element always corresponds to the smallest element in `a`, and the rightmost vice versa.
Is there a qui... | vals, idx = np.unique(a, return_inverse=True)
b = np.zeros((a.size, vals.size))
b[np.arange(a.size), idx] = 1 | import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
a = np.array([1.5, -0.4, 1.3])
elif test_case_id == 2:
np.random.seed(42)
a = np.random.rand(20)
elif test... | 298 | 7 | 1Numpy | 3 | 0Difficult-Rewrite | 4 |
Problem:
Let's say I have a 2d numpy integer array like this
a = array([[1,0,3], [2,4,1]])
I would like to encode this as a 2D one-hot array(in C order, e.g., a[1,1] corresponds to b[4]) for integers.
b = array([[0,1,0,0,0], [1,0,0,0,0], [0,0,0,1,0], [0,0,1,0,0], [0,0,0,0,1], [0,1,0,0,0]])
The leftmost element always c... | temp = (a - a.min()).ravel()
b = np.zeros((a.size, temp.max()+1))
b[np.arange(a.size), temp]=1
| import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def define_test_input(test_case_id):
if test_case_id == 1:
a = np.array([[1, 0, 3], [2, 4, 1]])
elif test_case_id == 2:
np.random.seed(42)
a = np.random.randint(0, 20, (10, ... | 299 | 8 | 1Numpy | 2 | 0Difficult-Rewrite | 4 |