question_id (int64, 59.6M to 70.5M) | question_title (string, 15 to 150 chars) | question_body (string, 134 to 33.4k chars) | accepted_answer_id (int64, 59.6M to 73.3M) | question_creation_date (timestamp[us]) | question_answer_count (int64, 1 to 9) | question_favorite_count (float64, 0 to 8, nullable) | question_score (int64, -6 to 52) | question_view_count (int64, 10 to 79k) | tags (string, 2 classes) | answer_body (string, 48 to 16.3k chars) | answer_creation_date (timestamp[us]) | answer_score (int64, -2 to 59) | link (string, 31 to 107 chars) | context (string, 134 to 251k chars) | answer_start (int64, 0 to 1.28k) | answer_end (int64, 49 to 10.2k) | question (string, 158 to 33.1k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
62,318,418 | Pandas: Merge two Dataframes (same columns) with condition... How can I improve this code? | <p>(Sorry, my English is not good...)</p>
<p>I'm studying with public data.
I'm trying to merge two Excel files with some conditions.
I tried a multi-loop approach, but it's too slow...
How can I improve my code?</p>
<p>Please help me TvT</p>
<h1>DataStructure example is</h1>
<p>old data(entire_file.xlsx)</p>
<pre><code> ... | 62,318,673 | 2020-06-11T06:34:19.763000 | 3 | null | 0 | 780 | python|pandas | <p>If you just want to get the new data, or the updated rows rather than the existing ones:</p>
<pre><code>result = pd.concat([data, tmp], ignore_index=True, sort=False)
result = result.sort_values(['KeyCode', 'Date'], ascending=[True,True]) # order to find duplicates later
result = result.drop_duplicates('KeyCode', keep='first') # dr... | 2020-06-11T06:52:25.613000 | 0 | https://pandas.pydata.org/docs/dev/user_guide/merging.html | Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also pro... | 571 | 884 | Pandas: Merge two Dataframes (same columns) with condition... How can I improve this code?
(Sorry, my English is not good...)
I'm studying with public data.
I'm trying to merge two Excel files with some conditions.
I tried a multi-loop approach, but it's too slow...
How can I improve my code?
Please help me TvT
DataStructure e... |
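The concat / sort / drop-duplicates pattern in the answer above can be completed into a runnable sketch. The sample frames, column values, and the choice to keep the newest row per `KeyCode` are assumptions, since the original snippet is truncated:

```python
import pandas as pd

# Hypothetical "old" and "new" data sharing the same columns.
data = pd.DataFrame({
    "KeyCode": ["A", "B"],
    "Date": ["2020-01-01", "2020-01-01"],
    "Value": [1, 2],
})
tmp = pd.DataFrame({
    "KeyCode": ["B", "C"],
    "Date": ["2020-02-01", "2020-02-01"],
    "Value": [20, 3],
})

# Stack both frames, sort so the newest row per key comes first,
# then keep only that first (newest) row for each KeyCode.
result = pd.concat([data, tmp], ignore_index=True, sort=False)
result = result.sort_values(["KeyCode", "Date"], ascending=[True, False])
result = result.drop_duplicates("KeyCode", keep="first").reset_index(drop=True)
```

This replaces the slow nested loops with vectorized operations; only the sort direction controls whether old or new rows win.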
65,874,915 | pandas python get table with the first date of event in every year, each country, alternative groupby | <p>Who can help, I'm trying to group this table here ( <a href="https://i.stack.imgur.com/bG1qG.jpg" rel="nofollow noreferrer">original table</a> ) with tables : (country, year, date of the earthquake) in this form: the first earthquake in every year, each country. I was able to group through groupby, ( <a href="https:... | 65,875,225 | 2021-01-24T19:24:12.550000 | 1 | null | -1 | 24 | python|pandas | <p>Once you get your <code>groupby</code> use <code>df = df.reset_index()</code>.
This will bring the columns you used in the groupby back as regular columns and give you the result you want</p> | 2021-01-24T19:54:19.217000 | 0 | https://pandas.pydata.org/docs/user_guide/visualization.html | Chart visualization#
Chart visualization#
Note
The examples below assume that you’re using Jupyter.
This section demonstrates visualization through charting. For information on
visualization of tabular data please see the section on Table Visualization.
We use the standard convention for referencing the matplotlib A... | 508 | 660 | pandas python get table with the first date of event in every year, each country, alternative groupby
Who can help, I'm trying to group this table here ( original table ) with tables : (country, year, date of the earthquake) in this form: the first earthquake in every year, each country. I was able to group through gro... |
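A sketch of the groupby-plus-`reset_index` approach the answer suggests; the column names and sample rows are assumptions, since the original table is only linked as an image:

```python
import pandas as pd

# Hypothetical earthquake log.
df = pd.DataFrame({
    "country": ["Chile", "Chile", "Japan", "Japan"],
    "date": pd.to_datetime(
        ["2020-03-01", "2020-01-15", "2020-02-02", "2021-05-05"]
    ),
})
df["year"] = df["date"].dt.year

# Earliest event per (country, year); reset_index turns the
# group keys back into regular columns, as the answer suggests.
first = df.groupby(["country", "year"])["date"].min().reset_index()
```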
63,177,142 | Change column values into rating and sum | <p>Change the column values and sum the row according to conditions.</p>
<pre><code>d = {'col1': [20, 40], 'col2': [30, 40], 'col3': [200, 300]}
df = pd.DataFrame(data=d)
col1 col2 col3
0 20 30 200
1 40 40 300
Col4 should give back the sum of the row after the values have been transferred to ... | 63,181,949 | 2020-07-30T16:09:53.107000 | 1 | null | 1 | 26 | pandas | <p>Use pd.cut as follows. The values didn't add up, though. Happy to assist further if clarified.</p>
<p>Use pd.cut to bin and save in new columns suffixed with "Points". Select only the columns whose names contain "Points" and add them.</p>
<pre><code>df['col1Points'],df['col2Points'],df['col3Points']=\
pd.cut(df.col1, [0,20,40],labels=[2,3])\... | 2020-07-30T21:57:02.090000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rank.html | pandas.DataFrame.rank#
pandas.DataFrame.rank#
DataFrame.rank(axis=0, method='average', numeric_only=_NoDefault.no_default, na_option='keep', ascending=True, pct=False)[source]#
Compute numerical data ranks (1 through n) along axis.
By default, equal values are assigned a rank that is the average of the
ranks of thos... | 414 | 1,026 | Change column values into rating and sum
Change the column values and sum the row according to conditions.
d = {'col1': [20, 40], 'col2': [30, 40], 'col3': [200, 300]}
df = pd.DataFrame(data=d)
col1 col2 col3
0 20 30 200
1 40 40 300
Col4 should give back the sum of the row after the values ... |
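The `pd.cut` approach from the answer can be completed into a runnable sketch; the bin edges and point labels for col2 and col3 are assumptions extrapolated from the truncated snippet:

```python
import pandas as pd

d = {"col1": [20, 40], "col2": [30, 40], "col3": [200, 300]}
df = pd.DataFrame(data=d)

# Bin each column into point categories (edges/labels are assumptions),
# then sum only the *Points columns per row.
df["col1Points"] = pd.cut(df.col1, [0, 20, 40], labels=[2, 3])
df["col2Points"] = pd.cut(df.col2, [0, 30, 40], labels=[2, 3])
df["col3Points"] = pd.cut(df.col3, [0, 200, 300], labels=[2, 3])
points = df.filter(like="Points").astype(int)
df["col4"] = points.sum(axis=1)
```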
60,569,207 | Using join on a dictionary of dataframes by datetime | <p>I have a dictionary of dataframes which have two columns 'Time' (datetimeformat) and another column which is different for each dataframe. The Time/Value entries are variable.</p>
<p>I want to join all of the dataframes to a master time dataframe which has 1 minute increments for the entire time range using the 'T... | 60,569,419 | 2020-03-06T17:48:45.797000 | 1 | null | -1 | 26 | python|pandas | <p>Change your for loop to </p>
<pre><code>for tag in tags:
df_man_data = df_man_data.join(df_dic[tag].set_index('Time'), on = 'Time',how = 'left')
</code></pre>
<p>.join() returns a new dataframe and assigning that new, joined dataframe to df_man_data each loop should capture all of your new columns of data iter... | 2020-03-06T18:05:30.197000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_dict.html | pandas.DataFrame.to_dict#
pandas.DataFrame.to_dict#
DataFrame.to_dict(orient='dict', into=<class 'dict'>)[source]#
Convert the DataFrame to a dictionary.
The type of the key-value pairs can be customized with the parameters
(see below).
Parameters
orientstr {‘dict’, ‘list’, ‘series’, ‘split’, ‘tight’, ‘records’, ‘... | 878 | 1,170 | Using join on a dictionary of dataframes by datetime
I have a dictionary of dataframes which have two columns 'Time' (datetimeformat) and another column which is different for each dataframe. The Time/Value entries are variable.
I want to join all of the dataframes to a master time dataframe which has 1 minute increme... |
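The reassignment fix in the answer can be sketched with hypothetical data (the tag names and values are assumptions):

```python
import pandas as pd

# Hypothetical master frame at 1-minute frequency plus two value frames.
times = pd.date_range("2020-01-01 00:00", periods=3, freq="min")
df_man_data = pd.DataFrame({"Time": times})
df_dic = {
    "temp": pd.DataFrame({"Time": times[:2], "temp": [1.0, 2.0]}),
    "pres": pd.DataFrame({"Time": times[1:], "pres": [9.0, 8.0]}),
}

# .join returns a NEW frame, so it must be reassigned each iteration.
for tag in df_dic:
    df_man_data = df_man_data.join(
        df_dic[tag].set_index("Time"), on="Time", how="left"
    )
```

Each pass adds one dataframe's column, aligned on `Time`; minutes missing from a source frame come back as NaN.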
61,141,992 | Create subindices based on two categorical variables | <p>I have a dataframe containing two categorical variables. I would like to add a third column with ascending indices for each of the categories, where one category is nested within the other.</p>
<p>Example:</p>
<pre><code>import pandas as pd
foo = ['a','a','a','a','b','b','b','b']
bar = [0,0,1,1,0,0,1,1]
df = pd.D... | 61,142,516 | 2020-04-10T14:05:52.473000 | 1 | null | 1 | 29 | python|pandas | <p>IIUC:</p>
<pre><code>s = df.groupby(['foo','bar']).cumcount()
df['foobar'] = df['foo'].factorize()[0] * (s.max() + 1) + s
</code></pre>
<p>Output:</p>
<pre><code> foo bar foobar
0 a 0 0
1 a 0 1
2 a 1 0
3 a 1 1
4 b 0 2
5 b 0 3
6 b 1 2... | 2020-04-10T14:34:48.710000 | 0 | https://pandas.pydata.org/docs/user_guide/advanced.html | MultiIndex / advanced indexing#
IIUC:
s = df.groupby(['foo','bar']).cumcount()
df['foobar'] = df['foo'].factorize()[0] * (s.max() + 1) + s
Output:
foo bar foobar
0 a 0 0
1 a 0 1
2 a 1 0
3 a 1 1
4 b 0 2
5 b 0 3
6 b 1 2
7 b 1 3... | 33 | 321 | Create subindices based on two categorical variables
I have a dataframe containing two categorical variables. I would like to add a third column with ascending indices for each of the categories, where one category is nested within the other.
Example:
import pandas as pd
foo = ['a','a','a','a','b','b','b','b']
bar = [... |
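The answer's two lines run as-is on the question's sample data; a self-contained sketch:

```python
import pandas as pd

foo = ["a", "a", "a", "a", "b", "b", "b", "b"]
bar = [0, 0, 1, 1, 0, 0, 1, 1]
df = pd.DataFrame({"foo": foo, "bar": bar})

# Running count within each (foo, bar) group, then offset by the
# foo level so the subindex restarts for each outer category.
s = df.groupby(["foo", "bar"]).cumcount()
df["foobar"] = df["foo"].factorize()[0] * (s.max() + 1) + s
```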
65,954,888 | Get the list of unique elements from multiple list and count of unique elements-column as list in data frame | <p>I have a dataset which looks something like this :</p>
<pre><code>df = pd.DataFrame()
df['home']=[['us','uk','argentina'],
['denmark','china'],
'',
'',
['australia','protugal','chile','russia'],
['turkey']]
df["away"] = [['us','me... | 65,955,314 | 2021-01-29T12:54:57.520000 | 1 | null | 0 | 32 | python|pandas | <p>I took the liberty of renaming the third column to unique country, as <code>row.unique</code> is already taken.</p>
<pre><code>df["unique_country"]=df.apply(lambda row: list(set((row.home if row.home else []) + (row.away if row.away else []))) , axis=1)
df["count_unique"]=df.apply(lambda row: len... | 2021-01-29T13:22:59.483000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html | pandas.DataFrame.explode#
pandas.DataFrame.explode#
DataFrame.explode(column, ignore_index=False)[source]#
Transform each element of a list-like to a row, replicating index values.
I took the liberty of renaming the third column to unique country, as row.unique is already taken.
df["unique_country"]=df.apply(lambda... | 185 | 483 | Get the list of unique elements from multiple list and count of unique elements-column as list in data frame
I have a dataset which looks something like this :
df = pd.DataFrame()
df['home']=[['us','uk','argentina'],
['denmark','china'],
'',
'',
['austra... |
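A runnable sketch of the set-union approach from the (truncated) answer; the sample rows are assumptions, and `sorted()` is added here only to make the output deterministic:

```python
import pandas as pd

# Hypothetical home/away lists; empty strings stand in for missing rows.
df = pd.DataFrame({
    "home": [["us", "uk"], "", ["chile"]],
    "away": [["us", "mexico"], ["china"], ""],
})

# Union the two lists per row, tolerating the falsy empty-string rows.
df["unique_country"] = df.apply(
    lambda row: sorted(set((row.home or []) + (row.away or []))), axis=1
)
df["count_unique"] = df["unique_country"].str.len()
```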
64,829,590 | "Reading 2 csv files with pandas, using a value in one file to look up other values in the second fi(...TRUNCATED) | "<p>I have 2 txt files being read by Pandas.</p>\n<p>The first file contains:</p>\n<pre><code>code (...TRUNCATED) | 64,846,053 | 2020-11-14T00:05:55.210000 | 1 | null | 1 | 34 | python|pandas | "<p>What you want is a merge:</p>\n<pre><code>dataset = pd.read_csv('firstFile.csv', sep='\\s+')\ndf(...TRUNCATED) | 2020-11-15T15:11:25.030000 | 0 | https://pandas.pydata.org/docs/user_guide/io.html | "IO tools (text, CSV, HDF5, …)#\n\nIO tools (text, CSV, HDF5, …)#\nThe pandas I/O API is a set o(...TRUNCATED) | 495 | 861 | "Reading 2 csv files with pandas, using a value in one file to look up other values in the second fi(...TRUNCATED) |
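The merge the answer proposes can be sketched with stand-in data, since both files are truncated; the file contents and column names are assumptions:

```python
from io import StringIO

import pandas as pd

# Hypothetical stand-ins for the two text files.
codes = pd.read_csv(StringIO("code\nB\nA\n"))
lookup = pd.read_csv(StringIO("code value\nA 1\nB 2\nC 3\n"), sep=r"\s+")

# A left merge keeps every row of `codes` and pulls the matching values.
merged = codes.merge(lookup, on="code", how="left")
```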
61,189,774 | How to transform numbers in data-frame column to comma separated | "<p>i am working with python , so in my dataframe i have a column named <code>Company Profit</code> (...TRUNCATED) | 61,190,028 | 2020-04-13T14:14:17.007000 | 2 | null | 1 | 290 | python|pandas | "<p>Something like this will work:</p>\n\n<pre><code>In [601]: def thousand_separator(val): \n .(...TRUNCATED) | 2020-04-13T14:27:58.500000 | 0 | https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html | "pandas.read_csv#\n\npandas.read_csv#\n\n\npandas.read_csv(filepath_or_buffer, *, sep=_NoDefault.no_(...TRUNCATED) | 1,025 | 1,485 | "How to transform numbers in data-frame column to comma separated\ni am working with python , so in (...TRUNCATED) |
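The truncated `thousand_separator` answer presumably formats each value with comma grouping; one way to sketch that, with sample profits as an assumption:

```python
import pandas as pd

df = pd.DataFrame({"Company Profit": [1200000, 98500, 3000]})

# Format with comma thousand separators; the result is a string column.
df["Company Profit"] = df["Company Profit"].map("{:,}".format)
```

Note this turns the column into strings, so do any arithmetic before formatting.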
64,659,356 | Pandas, most efficient way to apply a two functions on entire row | "<p>I have the following DataFrame:</p>\n<pre><code> Date Label (...TRUNCATED) | 64,659,496 | 2020-11-03T08:41:30.783000 | 1 | null | 0 | 37 | python|pandas | "<p>You need to pass in the rows to the apply-function. Try this:</p>\n<pre><code>def scorer(row):\n(...TRUNCATED) | 2020-11-03T08:50:49.767000 | 0 | https://pandas.pydata.org/docs/user_guide/groupby.html | "Group by: split-apply-combine#\n\nGroup by: split-apply-combine#\nBy “group by” we are referrin(...TRUNCATED) | 757 | 1,073 | "Pandas, most efficient way to apply a two functions on entire row\nI have the following DataFrame:\(...TRUNCATED) |
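The row-wise `apply` fix can be sketched as follows; the scorer logic and column names are assumptions based on the truncated question:

```python
import pandas as pd

df = pd.DataFrame({"Label": [1, 0, 1], "Prediction": [1, 1, 1]})

# With axis=1, apply passes each row to the function as a Series.
def scorer(row):
    return 1 if row["Label"] == row["Prediction"] else 0

df["score"] = df.apply(scorer, axis=1)
```

For a simple comparison like this, the vectorized `(df["Label"] == df["Prediction"]).astype(int)` is faster than `apply`.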
68,924,396 | pandas series row-wise comparison (preserve cardinality/indices of larger series) | "<p>I have two pandas series, both string dtypes.</p>\n<ol>\n<li><p>reports['corpus'] has 1287 rows<(...TRUNCATED) | 68,924,611 | 2021-08-25T14:04:15.793000 | 1 | null | 1 | 38 | python|pandas | "<p>Convert <code>uniq_labels</code> column from the <code>labels</code> dataframe to a list, and sp(...TRUNCATED) | 2021-08-25T14:18:21.953000 | 0 | https://pandas.pydata.org/docs/user_guide/scale.html | "Scaling to large datasets#\n\nScaling to large datasets#\npandas provides data structures for in-me(...TRUNCATED) | 226 | 681 | "pandas series row-wise comparison (preserve cardinality/indices of larger series)\nI have two panda(...TRUNCATED) |
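A sketch of the `isin` comparison the answer describes: `isin` preserves the larger series' original index, which is what the question asks for. The sample data is an assumption:

```python
import pandas as pd

# Hypothetical stand-in for the 1287-row corpus and the label list.
corpus = pd.Series(
    ["fracture", "normal", "edema", "normal"], index=[10, 11, 12, 13]
)
uniq_labels = ["fracture", "edema"]

# Boolean mask aligned to corpus; filtering keeps the original indices.
mask = corpus.isin(uniq_labels)
matches = corpus[mask]
```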