question_id int64 59.8M 70.5M | question_title stringlengths 17 135 | question_body stringlengths 274 3.35k | accepted_answer_id int64 59.8M 70.5M | question_creation_date timestamp[us] | question_answer_count int64 1 4 | question_favorite_count float64 0 3 ⌀ | question_score int64 -2 5 | question_view_count int64 20 1.17k | tags stringclasses 2
values | answer_body stringlengths 65 4.03k | answer_creation_date timestamp[us] | answer_score int64 0 5 | link stringlengths 49 87 | context stringlengths 1.11k 251k | answer_start int64 0 1.25k | answer_end int64 187 3.53k | question stringlengths 263 3.22k | predicted_answer stringclasses 24
values | parsed_answer stringlengths 41 3.53k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
63,701,878 | Convert series from pandas DataFrame to string | <p>For my dataframe</p>
<pre><code>df = pd.DataFrame({
cat: ['a','a','a','b','b','b'],
step: [1,3,2, 2,1,3],
Id: [101,103,102, 902,901,903] })
</code></pre>
<p>I need to get ID values as string on output using STEP values as ordering clause:</p>
<pre><code>cat_a: '101,102,103'
cat_b: '901,902,903'
</code></pre>
<p>I tr... | 63,701,919 | 2020-09-02T08:45:06.357000 | 1 | null | 1 | 37 | python|pandas | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a> by both columns first for expected order and then aggregate <code>join</code> with lambda method with convert to <code>string</code>s:</p>
<pre><c... | 2020-09-02T08:47:22.353000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_string.html | pandas.DataFrame.to_string#
pandas.DataFrame.to_string#
DataFrame.to_string(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, max_rows=None, max_cols=None, show_dimensions=False, decimal='.', line_width=No... | 383 | 697 | Convert series from pandas DataFrame to string
For my dataframe
df = pd.DataFrame({
cat: ['a','a','a','b','b','b'],
step: [1,3,2, 2,1,3],
Id: [101,103,102, 902,901,903] })
I need to get ID values as string on output using STEP values as ordering clause:
cat_a: '101,102,103'
cat_b: '901,902,903'
I try this with heavy ... | / | Use DataFrame.sort_values by both columns first for expected order and then aggregate join with lambda method with convert to strings:
d = (df.sort_values(['cat','step'])
.groupby('cat')['Id']
.agg(lambda x: ','.join(x.astype(str)))
.to_dict())
print (d)
{'a': '101,102,103', 'b': '901,902,903'}
|
67,914,151 | Filtering only 1 column in a df without returning the entire DF in 1 line | <p>I'm hoping that there is a way i can return a series from df while im filtering it in 1 line.
Is there a way I could return a column from my df after I filter it?
Currently my process is something like this</p>
<pre><code>df = df[df['a'] > 0 ]
list = df['a']
</code></pre> | 67,915,627 | 2021-06-10T03:10:52.427000 | 1 | null | 1 | 41 | python|pandas | <p>The <code>df.loc</code> syntax is the preferred way to do this, as @JohnM wrote in his comment, though I find the syntax from @Don'tAccept more readable and scaleable however since it can handle cases like column names with spaces in them. These combine like:</p>
<pre><code>df.loc[df['a'] > 0, 'a']
</code></pre>
... | 2021-06-10T06:08:47.610000 | 0 | https://pandas.pydata.org/docs/user_guide/groupby.html | Group by: split-apply-combine#
The df.loc syntax is the preferred way to do this, as @JohnM wrote in his comment, though I find the syntax from @Don'tAccept more readable and scaleable however since it can handle cases like column names with spaces in them. These combine like:
df.loc[df['a'] > 0, 'a']
Note this is ex... | 32 | 671 | Filtering only 1 column in a df without returning the entire DF in 1 line
I'm hoping that there is a way i can return a series from df while im filtering it in 1 line.
Is there a way I could return a column from my df after I filter it?
Currently my process is something like this
df = df[df['a'] > 0 ]
list = df['a']
| The df.loc syntax is the preferred way to do this, as @JohnM wrote in his comment, though I find the syntax from @Don'tAccept more readable and scaleable however since it can handle cases like column names with spaces in them. These combine like:
df.loc[df['a'] > 0, 'a']
Note this is expandable to provide multiple col... | |
64,239,252 | Time series data merge nearest right dataset has multiple same values | <p>I have two dataframes. The first is like a log while the second is like inputs. I want to combine this log and inputs based on their time columns.</p>
<p>I tried using <code>merge_asof</code> but it only takes one input into the input dateframe.</p>
<p>Here is an example. Dataframe Log Times, <code>log</code>:</p>
<... | 64,239,730 | 2020-10-07T07:28:00.140000 | 1 | null | 0 | 42 | python|pandas | <p>First, make sure that the <code>IO_Time</code> and <code>STARTTIME_Log</code> columns are of datetime type and are sorted (required to use <code>merge_asof</code>:</p>
<pre><code>log['STARTTIME_Log'] = pd.to_datetime(log['STARTTIME_Log'])
input['IO_Time'] = pd.to_datetime(input['IO_Time'])
log = log.sort_values('ST... | 2020-10-07T08:00:18.793000 | 0 | https://pandas.pydata.org/docs/dev/user_guide/merging.html | Merge, join, concatenate and compare#
First, make sure that the IO_Time and STARTTIME_Log columns are of datetime type and are sorted (required to use merge_asof:
log['STARTTIME_Log'] = pd.to_datetime(log['STARTTIME_Log'])
input['IO_Time'] = pd.to_datetime(input['IO_Time'])
log = log.sort_values('STARTTIME_Log')
inp... | 40 | 1,193 | Time series data merge nearest right dataset has multiple same values
I have two dataframes. The first is like a log while the second is like inputs. I want to combine this log and inputs based on their time columns.
I tried using merge_asof but it only takes one input into the input dateframe.
Here is an example. Data... | First, make sure that the IO_Time and STARTTIME_Log columns are of datetime type and are sorted (required to use merge_asof:
log['STARTTIME_Log'] = pd.to_datetime(log['STARTTIME_Log'])
input['IO_Time'] = pd.to_datetime(input['IO_Time'])
log = log.sort_values('STARTTIME_Log')
input = input.sort_values('IO_Time')
Now, ... | |
66,867,941 | Getting an error when checking if values in a list match a column PANDAS | <p>I'm just wondering how one might overcome the below error.</p>
<p><strong>AttributeError: 'list' object has no attribute 'str'</strong></p>
<p>What I am trying to do is create a new column "PrivilegedAccess" and in this column I want to write "True" if any of the names in the first_names column m... | 66,867,973 | 2021-03-30T09:09:32.117000 | 2 | null | 1 | 44 | python|pandas | <p>It seems you need select one column for <code>str.contains</code> and then use map or convert boolean to strings:</p>
<pre><code>Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values)
PrivilegedAccounts_DF = pd.DataFrame({'first_name... | 2021-03-30T09:11:26.167000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isin.html | It seems you need select one column for str.contains and then use map or convert boolean to strings:
Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values)
PrivilegedAccounts_DF = pd.DataFrame({'first_name':['Privileged 111',
... | 0 | 1,851 | Getting an error when checking if values in a list match a column PANDAS
I'm just wondering how one might overcome the below error.
AttributeError: 'list' object has no attribute 'str'
What I am trying to do is create a new column "PrivilegedAccess" and in this column I want to write "True" if any of the names in the f... | It seems you need select one column for str.contains and then use map or convert boolean to strings:
Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values)
PrivilegedAccounts_DF = pd.DataFrame({'first_name':['Privileged 111',
... | |
59,822,568 | How i can change invalid string pattern with default string in dataframe? | <p>i have a dataframe like below.</p>
<pre><code>name birthdate
-----------------
john 21011990
steve 14021986
bob
alice 13020198
</code></pre>
<p>i want to detect invalid value in birthdate column then change value.</p>
<p>the birthdate column use date format is "DDMMYYYY" . but in dataframe have a inval... | 59,822,864 | 2020-01-20T11:43:04.543000 | 3 | null | 1 | 57 | python|pandas | <p>You can first create non-valid date mask and then update their values:</p>
<pre><code>mask = df.birthdate.apply(lambda x: pd.to_datetime(x, format='%d%m%Y', errors='coerce')).isna()
df.loc[mask, 'birthdate'] = 31125000
name birthdate
0 john 21011990
1 steve 14021986
2 bob 31125000
3 alice ... | 2020-01-20T12:01:26.027000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html | pandas.to_datetime#
pandas.to_datetime#
pandas.to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, utc=None, format=None, exact=True, unit=None, infer_datetime_format=False, origin='unix', cache=True)[source]#
You can first create non-valid date mask and then update their values:
mask = df.birthdate.a... | 228 | 540 | How i can change invalid string pattern with default string in dataframe?
i have a dataframe like below.
name birthdate
-----------------
john 21011990
steve 14021986
bob
alice 13020198
i want to detect invalid value in birthdate column then change value.
the birthdate column use date format is "DDMMYYYY" .... | You can first create non-valid date mask and then update their values:
mask = df.birthdate.apply(lambda x: pd.to_datetime(x, format='%d%m%Y', errors='coerce')).isna()
df.loc[mask, 'birthdate'] = 31125000
name birthdate
0 john 21011990
1 steve 14021986
2 bob 31125000
3 alice 31125000
| |
62,441,689 | Pandas Groupby Ranges when ranges are not continuous | <p>I have a dataframe that looks like this:</p>
<pre><code>id | A | B | C
------------------------------
1 | 0.1 | 1.2 | 100
2 | 0.2 | 1.4 | 200
3 | 0.3 | 1.6 | 300
4 | 0.4 | 1.8 | 400
5 | 0.5 | 2.0 | 500
6 | 0.6 | 2.2 | 600
7 |... | 62,442,506 | 2020-06-18T03:07:44.227000 | 2 | null | 1 | 61 | python|pandas | <p>How about just using the apply function to generate the metrics you need.</p>
<pre><code>df2 = pd.DataFrame({'A_bins': [(0.1,1.1), (0.2,1.1), (0.4,1.1), (0.6,1.1), (0.8,1.1), (1.0,1.1)]})
def get_sum(row): # this is where the logic for your metrics goes
return df.loc[(row['A_bins'][0]<df['A']) & (row['A_... | 2020-06-18T04:39:35.983000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.cut.html | pandas.cut#
pandas.cut#
pandas.cut(x, bins, right=True, labels=None, retbins=False, precision=3, include_lowest=False, duplicates='raise', ordered=True)[source]#
Bin values into discrete intervals.
Use cut when you need to segment and sort data values into bins. This
function is also useful for going from a continuo... | 338 | 883 | Pandas Groupby Ranges when ranges are not continuous
I have a dataframe that looks like this:
id | A | B | C
------------------------------
1 | 0.1 | 1.2 | 100
2 | 0.2 | 1.4 | 200
3 | 0.3 | 1.6 | 300
4 | 0.4 | 1.8 | 400
5 | 0.5 | 2.0 | 500
6 ... | How about just using the apply function to generate the metrics you need.
df2 = pd.DataFrame({'A_bins': [(0.1,1.1), (0.2,1.1), (0.4,1.1), (0.6,1.1), (0.8,1.1), (1.0,1.1)]})
def get_sum(row): # this is where the logic for your metrics goes
return df.loc[(row['A_bins'][0]<df['A']) & (row['A_bins'][1]>=df['A']),'C'].s... | |
69,511,132 | How to add positive and negative increments to every row based on a specific date? | <p>I have a pandas df which has 2 columns such as <code>Date, First_Date (constant)</code>.</p>
<p>I am trying to add a new column in which the value will be 0 where First_Date=Date. Then, all rows below that instance should increment in a negative way such as -1, -2, -3 etc.. and same should be true for rows above sho... | 69,511,213 | 2021-10-09T23:05:50.443000 | 2 | null | 0 | 101 | python|pandas | <pre><code>>>> df = pd.DataFrame({'Date':pd.date_range('2020-01-01', '2020-01-18')})
>>> df
Date
0 2020-01-01
1 2020-01-02
2 2020-01-03
3 2020-01-04
4 2020-01-05
5 2020-01-06
6 2020-01-07
7 2020-01-08
8 2020-01-09
9 2020-01-10
10 2020-01-11
11 2020-01-12
12 2020-01-13
13 2020-01-14
... | 2021-10-09T23:27:09.653000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.Index.shift.html | >>> df = pd.DataFrame({'Date':pd.date_range('2020-01-01', '2020-01-18')})
>>> df
Date
0 2020-01-01
1 2020-01-02
2 2020-01-03
3 2020-01-04
4 2020-01-05
5 2020-01-06
6 2020-01-07
7 2020-01-08
8 2020-01-09
9 2020-01-10
10 2020-01-11
11 2020-01-12
12 2020-01-13
13 2020-01-14
14 2020-01-15
15 2020-01-16
1... | 0 | 1,466 | How to add positive and negative increments to every row based on a specific date?
I have a pandas df which has 2 columns such as Date, First_Date (constant).
I am trying to add a new column in which the value will be 0 where First_Date=Date. Then, all rows below that instance should increment in a negative way such as... | / | >>> df = pd.DataFrame({'Date':pd.date_range('2020-01-01', '2020-01-18')})
>>> df
Date
0 2020-01-01
1 2020-01-02
2 2020-01-03
3 2020-01-04
4 2020-01-05
5 2020-01-06
6 2020-01-07
7 2020-01-08
8 2020-01-09
9 2020-01-10
10 2020-01-11
11 2020-01-12
12 2020-01-13
13 2020-01-14
14 2020-01-15
15 2020-01-16
1... |
69,712,773 | Remove duplicates that are in included in two columns in pandas | <p>I have a dataframe that has two columns. I want to delete rows such that, for each row, it includes only one instance in the first column, but all unique values in column two are included.</p>
<p>Here is an example:</p>
<pre><code>data = [[1,100],
[1,101],
[1,102],
[1,103],
[2,102],
[2,... | 69,713,006 | 2021-10-25T18:02:25.890000 | 2 | 1 | 2 | 105 | python|pandas | <p>One way is to use a <code>set</code> and create custom function:</p>
<pre><code>seen = set()
def func(d):
res = d[~d.isin(seen)]
if len(res):
cur = res.iat[0]
seen.add(cur)
return cur
print (df.groupby("x")["y"].apply(func))
x
1 100
2 102
3 107
Name: y,... | 2021-10-25T18:24:46.800000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop_duplicates.html | pandas.DataFrame.drop_duplicates#
One way is to use a set and create custom function:
seen = set()
def func(d):
res = d[~d.isin(seen)]
if len(res):
cur = res.iat[0]
seen.add(cur)
return cur
print (df.groupby("x")["y"].apply(func))
x
1 100
2 102
3 107
Name: y, dtype: int64
... | 35 | 318 | Remove duplicates that are in included in two columns in pandas
I have a dataframe that has two columns. I want to delete rows such that, for each row, it includes only one instance in the first column, but all unique values in column two are included.
Here is an example:
data = [[1,100],
[1,101],
[1,102],
... | One way is to use a set and create custom function:
seen = set()
def func(d):
res = d[~d.isin(seen)]
if len(res):
cur = res.iat[0]
seen.add(cur)
return cur
print (df.groupby("x")["y"].apply(func))
x
1 100
2 102
3 107
Name: y, dtype: int64
| |
68,385,969 | Calculate Time Between Orders By Customer ID | <p>I have following Porblem:</p>
<p>I want to calculate the time between orders for every Customer in Days.
My Dataframe looks like below.</p>
<pre><code> CustID OrderDate Sales
5 16838 2015-05-13 197.00
6 17986 2015-12-18 224.90
7 18191 2015-11-10 325.80
8 18191 2015-02-09 43.80
9 ... | 68,387,807 | 2021-07-14T22:59:18.797000 | 1 | null | 0 | 117 | python|pandas | <p>You need to convert the date column to a datetime first and also put it in chronological order. This code should dot he trick:</p>
<pre><code>data.OrderDate = pd.to_datetime(data.OrderDate)
data = data.sort_values(by=['OrderDate'])
data['days'] = data.groupby('CustID').OrderDate.apply(lambda x: x.diff())
</code></pr... | 2021-07-15T04:12:05.180000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.diff.html | pandas.DataFrame.diff#
pandas.DataFrame.diff#
DataFrame.diff(periods=1, axis=0)[source]#
First discrete difference of element.
Calculates the difference of a DataFrame element compared with another
element in the DataFrame (default is element in previous row).
You need to convert the date column to a datetime first... | 265 | 710 | Calculate Time Between Orders By Customer ID
I have following Porblem:
I want to calculate the time between orders for every Customer in Days.
My Dataframe looks like below.
CustID OrderDate Sales
5 16838 2015-05-13 197.00
6 17986 2015-12-18 224.90
7 18191 2015-11-10 325.80
8 18191 2015... | You need to convert the date column to a datetime first and also put it in chronological order. This code should dot he trick:
data.OrderDate = pd.to_datetime(data.OrderDate)
data = data.sort_values(by=['OrderDate'])
data['days'] = data.groupby('CustID').OrderDate.apply(lambda x: x.diff())
Notice that this gives the d... | |
65,677,018 | How to generate a list of names associated with specific letter grade using regex in python pandas | <p>I'm starting with this code but it generates a list of only last names and letter grades in the format ['First Last: A']. What expression can I use to create a list of names associated with a letter grade A in the format ['First', 'Last'] with names extracted from only A letter grades? More specifically, I'd like to... | 65,677,441 | 2021-01-12T01:55:02.370000 | 2 | null | 0 | 409 | python|pandas | <p>Ther are so much ways, depending on what you input data:</p>
<pre><code>re.split(':', 'First Last: Grade')
# ['First Last', ' Grade']
re.findall('^(.*?):', 'First Last: Grade')
# ['First Last']
re.findall('^(\w+\s?\w*):', 'First Last: Grade')
# ['First Last']
</code></pre> | 2021-01-12T02:59:16.437000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.Series.str.count.html | pandas.Series.str.count#
pandas.Series.str.count#
Series.str.count(pat, flags=0)[source]#
Count occurrences of pattern in each string of the Series/Index.
This function is used to count the number of times a particular regex
Ther are so much ways, depending on what you input data:
re.split(':', 'First Last: Grade')... | 229 | 477 | How to generate a list of names associated with specific letter grade using regex in python pandas
I'm starting with this code but it generates a list of only last names and letter grades in the format ['First Last: A']. What expression can I use to create a list of names associated with a letter grade A in the format ... | Index ( [ ' A ', ' A ', ' Aaba ', ' cat ' ] ). str. count / | Ther are so much ways, depending on what you input data:
re.split(':', 'First Last: Grade')
# ['First Last', ' Grade']
re.findall('^(.*?):', 'First Last: Grade')
# ['First Last']
re.findall('^(\w+\s?\w*):', 'First Last: Grade')
# ['First Last']
|
End of preview. Expand in Data Studio
- Downloads last month
- 4