Dataset Preview
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 3 new columns ({'check_char_repetition_criteria', 'check_flagged_words_criteria', 'check_stop_word_ratio_criteria'})
This happened while the json dataset builder was generating data using
hf://datasets/CarperAI/pile-v2-local-dedup-small/data/ASFPublicMail_ver2/data.json (at revision eef37e4714df72c84fa26dd1fa6877bf75224706)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
text: string
meta: string
id: string
check_char_repetition_criteria: double
check_flagged_words_criteria: double
check_stop_word_ratio_criteria: double
to
{'id': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'meta': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 3 new columns ({'check_char_repetition_criteria', 'check_flagged_words_criteria', 'check_stop_word_ratio_criteria'})

This happened while the json dataset builder was generating data using

hf://datasets/CarperAI/pile-v2-local-dedup-small/data/ASFPublicMail_ver2/data.json (at revision eef37e4714df72c84fa26dd1fa6877bf75224706)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
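The second remedy is to declare separate configurations in the dataset's README YAML front matter, so that files with different schemas are never merged into one split. A hypothetical sketch following the Hub's manual-configuration syntax linked above (the `default` config's file list is illustrative; only `ASFPublicMail_ver2` is named in the error):

```yaml
configs:
  - config_name: default
    data_files:
      - "data/SomeOtherSubset/data.json"   # subsets sharing the {id, text, meta} schema
  - config_name: ASFPublicMail_ver2
    data_files:
      - "data/ASFPublicMail_ver2/data.json"  # subset with the extra check_*_criteria columns
```

Each `config_name` then gets its own schema, so the extra `check_*_criteria` columns no longer conflict with the other files.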
| id (string) | text (string) | meta (string) |
|---|---|---|
5111 | """
# Getting Started
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import re
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import tensorflow as tf
data = pd.read_csv('../input/all-space-... | {'source': 'AI4Code', 'id': '09775e224936f6'} |
36911 | """
# 1- Linear Regression
"""
#Imports
import torch
import torch.nn as nn
import numpy as np
from sklearn import datasets
import matplotlib.pyplot as plt
# Data prep
X_numpy, y_numpy = datasets.make_regression(n_samples=100, n_features=1, noise=20, random_state=4)
# cast to float Tensor
X = torch.from_numpy(X_numpy.a... | {'source': 'AI4Code', 'id': '43f52404cd99c9'} |
82907 | """
#### This is fork of https://www.kaggle.com/code1110/janestreet-faster-inference-by-xgb-with-treelite beautifull notebook on how to make faster prediction with xgb!! <br>
#### I'm using PurgedGroupTimeSeriesSplit for validation with multitarget.
"""
"""
# Install treelite
"""
!pip --quiet install ../input/treelite... | {'source': 'AI4Code', 'id': '983be5f1810ce2'} |
69916 | import os
import pandas as pd
import numpy as np
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
import matplotlib.pyplot as plt
import seaborn as sns
import contextily as ctx
from mpl_toolkits.basemap import Basemap
"""
Hello!
I am ... | {'source': 'AI4Code', 'id': '80983462780669'} |
26829 | """
**INTRODUCTION TO PYTHON: I WILL SHARE MY OWN EXPERIENCE HERE TO LEARN TOGETHER AND TEACH OTHER POEPLE.
**
"""
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's sever... | {'source': 'AI4Code', 'id': '315effe6373b56'} |
974 | import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
%matplotlib inline
import matplotlib.pyplot as plt # visualization
!pip install seaborn as sns -q # visualization with seaborn v0.11.1
import seaborn as sns # visualization
import missingno as msno # missing value... | {'source': 'AI4Code', 'id': '01d759dd91e914'} |
16372 | # This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.... | {'source': 'AI4Code', 'id': '1dd8952759046b'} |
49705 | from IPython.core.display import display, HTML, Javascript
html_contents = """
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" href="https://www.w3schools.com/w3css/4/w3.css">
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Raleway">
<link rel="stylesheet"... | {'source': 'AI4Code', 'id': '5b7abb6c254593'} |
88895 | """
# Exploring Trending Youtube Video Statistics for the U.S.
Growing up watching YouTube shaped a lot of my interests and humor. I still remember the early days when nigahiga's How To Be Gangster and ALL YOUR BASE ARE BELONG TO US was peak comedy. So I thought it would be fun to see the state of YouTube and what's p... | {'source': 'AI4Code', 'id': 'a3082f04cec23e'} |
47518 | import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import datetime
from sklearn.metrics import mean_squared_log_error
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
"""... | {'source': 'AI4Code', 'id': '578c6c4a770d5c'} |
1494 | """
### Problem Description :
A retail company “ABC Private Limited” wants to understand the customer purchase behaviour(specifically, purchase amount)
against various products of different categories. They have shared purchase summary of various customers for selected
high volume products from last month.
Th... | {'source': 'AI4Code', 'id': '02c7e612bbc663'} |
126841 | # This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.... | {'source': 'AI4Code', 'id': 'e93dff927f55bc'} |
74395 | """
# About H2O
Machine Learning PLatform used in here is H2O, which is a Fast, Scalable, Open source application for machine/deep learning.
Big names such as PayPal, Booking.com, Cisco are using H2O as the ML platform.
The speciality of h2o is that it is using in-memory compression to handles billions of data rows in... | {'source': 'AI4Code', 'id': '88d612b5d09e0f'} |
39953 | """
# **Project Objective and Brief**
## *In this project, rule-based and Deep-Learning algorithms are used with an aim to first appropriately detect different type of emotions contained in a collection of Tweets and then accurately predict the overall emotions of the Tweets is done.*
"""
"""
## **Preprocessor is a pr... | {'source': 'AI4Code', 'id': '4990ea5b1acd90'} |
2655 | """
**Some Cooking Ideas for Tonight**
* The idea is to create some new recipes when people are looking for something to eat at home
* Build some ingredients set for each cuisine and randomly choose the ingredients
"""
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by... | {'source': 'AI4Code', 'id': '0511fc218c4b1c'} |
127502 | # This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O... | {'source': 'AI4Code', 'id': 'ea83b6cd05caf6'} |
126651 | ! conda install -y hvplot=0.5.2 bokeh==1.4.0
! conda install -y -c conda-forge sklearn-contrib-py-earth
"""
# Global Surrogates Models
Many classes of models can be difficult to explain. For Tree Ensembles, while it may be easy to describe the rationale for a single trees outputs, it may be much harder to describe how ... | {'source': 'AI4Code', 'id': 'e8ebf31aa52f0e'} |
109814 | """
# **A/B TESTING**
**What is A/B Testing?**
The A/B test is a hypothesis test in which two-sample user experiences are tested. In other words, A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B, and determining which of the ... | {'source': 'AI4Code', 'id': 'c9ce8f2cf6c544'} |
End of preview.