markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Pandas is the package of choice for working with structured data. Pandas is built on 2 closely related structures: the Series and the DataFrame. Both structures handle data as indexed tables. The Pandas classes build on NumPy classes,... | # import pandas with:
import pandas as pd
import numpy as np
%matplotlib inline | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Pandas Series - Series are indexed, which is their advantage over NumPy arrays - The `.values` and `.index` attributes show the different parts of each Series - A Series is defined with `pd.Series([,], index=['','',])` - An element can be accessed with `ma_serie['France']` - One can... | ser_pop = pd.Series([70,8,300,1200],index=["France","Suisse","USA","Chine"])
ser_pop
# extract a value with a key
ser_pop["France"]
# a position can also be used with .iloc[]
ser_pop.iloc[0]
# apply the condition inside []
ser_pop[ser_pop>50] | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Other operations on Series objects - Use `.name` to set the name of the Series - Use `.index.name` to set the title of the observation column **Exercise:** Set the names of the object and of the country column for the previous Series | ser_pop.name = "Populations"
ser_pop.index.name = "Pays"
ser_pop | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Missing data In pandas, missing values are identified with NumPy's `np.nan`. Other functions are available, such as: | pd.Series([2,np.nan,4],index=['a','b','c'])
pd.isna(pd.Series([2,np.nan,4],index=['a','b','c']))
pd.notna(pd.Series([2,np.nan,4],index=['a','b','c'])) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Dates with pandas - Python has a datetime module that makes it easy to handle dates - Pandas applies date operations to Series and DataFrames - The Python date format is `YYYY-MM-DD HH:MM:SS` - Dates can be generated with `pd.date_range()` using various freque... (see the sketch after this cell) | dates = pd.date_range("2017-10-03", "2020-02-27",freq="W")
valeurs = np.random.random(size=len(dates))
ma_serie=pd.Series(valeurs, index =dates)
ma_serie.plot()
len(dates) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
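The truncated bullet above lists `pd.date_range()` frequencies; as a hedged illustration (not a cell from the original notebook), here are a few common `freq` aliases:

```python
import pandas as pd

# a minimal sketch of common freq aliases for pd.date_range()
daily = pd.date_range("2020-01-01", periods=3, freq="D")    # calendar days
monthly = pd.date_range("2020-01-01", periods=3, freq="M")  # month ends
hourly = pd.date_range("2020-01-01", periods=3, freq="H")   # hours
print(daily, monthly, hourly, sep="\n")
```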
The DataFrame - DataFrames are very flexible objects that can be built in different ways - They can be built from copy/pasted data, directly from the Internet, or by entering values manually - DataFrames are close to dictionaries, and these objects can be built ... (see the sketch after this cell) | frame1=pd.DataFrame(np.random.randn(10).reshape(5,2),
index=["obs_"+str(i) for i in range(5)],
columns=["col_"+str(i) for i in range(2)])
frame1 | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
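The truncated bullet above notes that DataFrames are close to dictionaries; a minimal sketch of that construction with illustrative values (not from the original notebook):

```python
import pandas as pd

# dict keys become columns; the lists become the column contents
frame2 = pd.DataFrame({"pays": ["France", "Suisse"], "pop": [70, 8]},
                      index=["obs_0", "obs_1"])
print(frame2)
```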
Operations on DataFrames The column names can be displayed: | print(frame1.columns) | Index(['col_0', 'col_1'], dtype='object')
| MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
A column can be accessed with: - `frame1.col_0` : beware of column names containing spaces... - `frame1['col_0']` A cell can be accessed with: - `frame1.loc['obs_1','col_0']` : uses the index and column names - `frame1.iloc[1,0]` : uses positions in the DataFrame (see the sketch after this cell) Display options... | frame1.head(3) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
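A short self-contained sketch of the access patterns listed above (it rebuilds `frame1` so it runs on its own; not a cell from the original notebook):

```python
import numpy as np
import pandas as pd

frame1 = pd.DataFrame(np.random.randn(10).reshape(5, 2),
                      index=["obs_" + str(i) for i in range(5)],
                      columns=["col_" + str(i) for i in range(2)])
print(frame1["col_0"])               # column by name
print(frame1.loc["obs_1", "col_0"])  # cell by index label and column name
print(frame1.iloc[1, 0])             # the same cell by positions
```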
To display a summary of the DataFrame: | frame1.info() | <class 'pandas.core.frame.DataFrame'>
Index: 5 entries, obs_0 to obs_4
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 5 non-null float64
1 col_1 5 non-null float64
dtypes: float64(2)
memory usage: 120.0+ bytes
| MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Importing external data Pandas is the most effective tool for importing external data; it supports many formats, including csv, Excel, SQL, SAS... Importing data with Pandas Whatever the file type, Pandas has a function: ```python frame=pd.read_...('chemin_du_fichier/nom_du_... | # use the id column as the index of our DataFrame
airbnb = pd.read_csv("https://www.stat4decision.com/airbnb.csv",index_col="id")
airbnb.info()
# the price column has object dtype, i.e. strings
# there are 2933 listings that cost $80 per night
airbnb["price"].value_counts()
dpt = pd.read_csv("... | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1300 entries, 0 to 1299
Data columns (total 38 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CODGEO 1300 non-null int64
1 LIBGEO 1300 non-null object
2 REG 1300 non-null i... | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Other data types JSON: JSON objects resemble dictionaries. Use the `json` module and then `json.loads()` to turn a JSON input into a json object (see the sketch after this cell). HTML: use `pd.read_html(url)`. This function is based on the `beautifulsoup` and `html5lib` packages. It returns a li... | bank = pd.read_html("http://www.fdic.gov/bank/individual/failed/banklist.html")
# read_html() stores the tables of a web page in a list
type(bank)
len(bank)
bank[0].head(10)
nba = pd.read_html("https://en.wikipedia.org/wiki/2018%E2%80%9319_NBA_season")
len(nba)
nba[3] | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
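The JSON paragraph above mentions `json.loads()` without a code cell; a minimal sketch with hypothetical data (not from the original notebook):

```python
import json
import pandas as pd

raw = '{"France": 70, "Suisse": 8, "USA": 300}'  # hypothetical JSON input
obj = json.loads(raw)   # parse the JSON string into a Python dict
ser = pd.Series(obj)    # the dict converts directly to a Series
print(ser)
```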
Importing from Excel There are two approaches for Excel: - `pd.read_excel()` - the `pd.ExcelFile()` class. In that case, use: ```python xlsfile=pd.ExcelFile('fichier.xlsx') ; xlsfile.parse('Sheet1') ``` **Exercise:** Import an Excel file with both approaches, using: `cr... | pd.read_excel("./data/credit2.xlsx",usecols=["Age","Gender"])
pd.read_excel("./data/credit2.xlsx",usecols="A:C")
credit2 = pd.read_excel("./data/credit2.xlsx", index_col="Customer_ID")
credit2.head()
# create an object of type ExcelFile
ville = pd.ExcelFile("./data/ville.xls")
ville.sheet_names
# extract all the ... | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We create a function that imports the Excel sheets whose name contains the term nom_dans_feuille (a hedged completion follows the truncated cell) | def import_excel_feuille(chemin_fichier, nom_dans_feuille = ""):
""" imports the Excel sheets whose name contains nom_dans_feuille"""
excel = pd.ExcelFile(chemin_fichier)
list_feuilles = []
for nom_feuille in excel.sheet_names:
if nom_dans_feuille in nom_... | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
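The cell above is cut off; a hedged completion of the helper under the behavior stated in the markdown (sheets whose name contains `nom_dans_feuille` are parsed and collected). The return value is an assumption, since the original ending is missing:

```python
import pandas as pd

def import_excel_feuille(chemin_fichier, nom_dans_feuille=""):
    """Import the Excel sheets whose name contains nom_dans_feuille."""
    excel = pd.ExcelFile(chemin_fichier)
    list_feuilles = []
    for nom_feuille in excel.sheet_names:
        if nom_dans_feuille in nom_feuille:
            # parse the matching sheet and keep it
            list_feuilles.append(excel.parse(nom_feuille))
    return list_feuilles  # assumed: the original cell is truncated here
```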
Importing SQL data Pandas has a `read_sql()` function that imports databases or queries directly into DataFrames. A connector is still needed to access the databases. To set up this connector, we use the SQLAlchemy package. Depending on the type of datab... | # import the connection tool
from sqlalchemy import create_engine | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Create a connection: ```python connexion=create_engine("sqlite:///(...).sqlite") ``` Use one of the Pandas functions to load the data: ```python requete="""select ... from ...""" ; frame_sql=pd.read_sql_query(requete,connexion) ``` **Exercise:** Import the SQLite salaries database and retrieve the tabl... | connexion=create_engine("sqlite:///./data/salaries.sqlite")
connexion.table_names()
salaries = pd.read_sql_query("select * from salaries", con=connexion)
salaries.head() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Importing from SPSS Pandas has a `pd.read_spss()` function. Warning! It requires the latest version of Pandas and additional packages! **Exercise:** Import the SPSS file located in ./data/ | #base = pd.read_spss("./data/Base.sav") | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Sorting with Pandas To sort, use: - `.sort_index()` to sort by index - `.sort_values()` to sort by values - `.rank()` to display the rank of the observations. Several sorts can be combined in the same operation; in that case, use lists of columns: ```python frame.sort_values(["col_... | salaries.sort_values(["JobTitle","TotalPay"],ascending=[True, False]) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Simple statistics DataFrames have many methods for computing simple statistics: - `.sum(axis=0)` sums by column - `.sum(axis=1)` sums by row - `.min()` and `.max()` give the minimum and maximum per column - `.idxmin()` and `.idxmax()` give the index of the m... (see the sketch after this cell) | # this column has object dtype, so it will have to be converted
airbnb["price"].dtype
airbnb["price_num"] = pd.to_numeric(airbnb["price"].str.replace("$","")
.str.replace(",",""))
airbnb["price_num"].dtype
airbnb["price_num"].mean()
airbnb["price_num"].describe()
# extract the id ... | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
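The last comment above is cut off; a hedged sketch of what `.idxmax()` returns here (the index label of the maximum price, i.e. the listing id, since id was set as the index). It assumes the `airbnb` DataFrame and `price_num` column built in the cells above:

```python
# sketch, not the original cell: id of the most expensive listing
most_expensive_id = airbnb["price_num"].idxmax()
print(most_expensive_id, airbnb.loc[most_expensive_id, "price_num"])
```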
Computing the weighted mean of a survey | base = pd.read_csv("./data/Base.csv")
# weighted mean
np.average(base["resp_age"],weights=base["Weight"])
# plain mean
base["resp_age"].mean() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Using statsmodels | from statsmodels.stats.weightstats import DescrStatsW
# select the numeric columns
base_num = base.select_dtypes(np.number)
# compute the weighted descriptive statistics
mes_stat = DescrStatsW(base_num, weights=base["Weight"])
base_num.columns
mes_stat.var
mes_stat_age = DescrStatsW(base["resp_age"], weights=base["... | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We build a function that computes weighted descriptive statistics for a column | def stat_desc_w_ipsos(data, columns, weights):
""" Computes and displays the weighted means and standard deviations
Input : - data : the data as a DataFrame
- columns : names of the numeric columns to analyze
- weights : name of the weights column
"""
from st... | Weighted mean : 48.40297631233564
Weighted standard deviation : 17.1309963999935
| MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
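The body of `stat_desc_w_ipsos` is truncated above; a hedged completion consistent with the printed output (weighted mean and standard deviation via `DescrStatsW`):

```python
from statsmodels.stats.weightstats import DescrStatsW

def stat_desc_w_ipsos(data, columns, weights):
    """Compute and print weighted means and standard deviations (sketch)."""
    stats = DescrStatsW(data[columns], weights=data[weights])
    print("Weighted mean :", stats.mean)
    print("Weighted standard deviation :", stats.std)
```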
Handling missing data - Missing values are identified by `NaN` - `.dropna()` removes missing values from a Series and drops the whole row in the case of a DataFrame - To drop by column, use `.dropna(axis=1)` - To replace all missing values, `.fil... | credit1 = pd.read_csv("./data/credit1.txt",sep="\t")
credit_global = pd.merge(credit1,credit2,how="inner",on="Customer_ID")
credit_global.head() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We join the Airbnb listings data with the calendar data on apartment occupancy | airbnb_reduit = airbnb[["price_num","latitude","longitude"]]
calendar = pd.read_csv("https://www.stat4decision.com/calendar.csv.gz")
calendar.head()
new_airbnb = pd.merge(calendar,airbnb[["price_num","latitude","longitude"]],
left_on = "listing_id",right_index=True)
new_airbnb.shape | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We want to extract some basic statistics. For example, the mean price of the listings on July 8, 2018: | new_airbnb[new_airbnb["date"]=='2018-07-08']["price_num"].mean() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We extract the number of available / occupied nights: | new_airbnb["available"].value_counts(normalize = True) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Looking at the share of listings occupied on January 8, 2019, we get: | new_airbnb[new_airbnb["date"]=='2019-01-08']["available"].value_counts(normalize = True) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
The mean price of the apartments available on July 8, 2018: | new_airbnb[(new_airbnb["date"]=='2018-07-08')&(new_airbnb["available"]=='t')]["price_num"].mean() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We convert the date column from string to DateTime, which enables new operations: | new_airbnb["date"]= pd.to_datetime(new_airbnb["date"])
# build a column with the day of the week
new_airbnb["jour_semaine"]=new_airbnb["date"].dt.day_name() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
The mean price of the available Saturday nights is therefore: | new_airbnb[(new_airbnb["jour_semaine"]=='Saturday')&(new_airbnb["available"]=='t')]["price_num"].mean() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Handling duplicates - Use `.duplicated()` or `.drop_duplicates()` when you want to delete repeated rows - You can focus on a single variable by passing its name directly; in that case the first occurrence is kept. To keep the last occurr... (see the sketch after this cell) | airbnb["price_disc1"]=pd.cut(airbnb["price_num"],bins=5)
airbnb["price_disc2"]=pd.qcut(airbnb["price_num"],5)
airbnb["price_disc1"].value_counts()
airbnb["price_disc2"].value_counts() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Pivot tables with Pandas DataFrames have methods to generate pivot tables, notably: ```python frame1.pivot_table() ``` This method handles many cases with standard and custom functions. **Exercise:** Display a pivot table for the Airbnb data. | # we define a custom function
def moy2(x):
return x.mean()/x.var() | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
We cross room_type with the price level and look at the mean review_scores_rating, plus the number of occurrences and a custom function: | airbnb['room_type']
airbnb['price_disc2']
airbnb['review_scores_rating']
airbnb.pivot_table(values=["review_scores_rating",'review_scores_cleanliness'],
index="room_type",
columns='price_disc2',aggfunc=["count","mean",moy2]) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Using GroupBy on DataFrames - `.groupby` groups observations according to a so-called grouping variable - For example, `frame.groupby('X').mean()` gives the means per group of `X` - `.size()` gives the size of the groups, and other functions ... | airbnb_group_room = airbnb.groupby(['room_type','price_disc2'])
airbnb_group_room["price_num"].describe()
# several statistics can be displayed at once
airbnb_group_room["price_num"].agg(["mean","median","std","count"])
new_airbnb.groupby(['available','jour_semaine'])["price_num"].agg(["mean","count"]) | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Try using a lambda function on the groupby (see the sketch after this cell) **Exercise:** - Salary data - We use `groupby()` to group the job types - And we compute statistics for each type. The `.agg()` method can be used with, for example, `'mean'` as a parameter. Another frequently used m... | # lowercase all the JobTitle values
salaries["JobTitle"]= salaries["JobTitle"].str.lower()
# number of distinct JobTitle values
salaries["JobTitle"].nunique()
salaries.groupby("JobTitle")["TotalPay"].mean().sort_values(ascending=False)
salaries.groupby("JobTitle")["TotalPay"].agg(["mean","count"]).sort_values("count",... | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Advanced graphical representations are also possible: | import matplotlib.pyplot as plt
plt.figure(figsize=(10,5))
plt.scatter("longitude","latitude", data = airbnb[airbnb["price_num"]<150], s=1,c = "price_num", cmap=plt.get_cmap("jet"))
plt.colorbar()
plt.savefig("paris_airbnb.jpg")
airbnb[airbnb["price_num"]<150] | _____no_output_____ | MIT | 05_pandas.ipynb | stat4decision/python-data-ipsos-fev20 |
Linear Regression with a Real Dataset This Colab uses a real dataset to predict the prices of houses in California. Learning Objectives: After doing this Colab, you'll know how to do the following: * Read a .csv file into a [pandas](https://developers.google.com/machine-learning/glossary/pandas) DataFrame. * Exami... | #@title Run on TensorFlow 2.x
%tensorflow_version 2.x | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Import relevant modules The following hidden code cell imports the modules needed to run the rest of this Colab. | #@title Import relevant modules
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
# The following lines adjust the granularity of reporting.
pd.options.display.max_rows = 10
pd.options.display.float_format = "{:.1f}".format | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
The dataset Datasets are often stored on disk or at a URL in [.csv format](https://wikipedia.org/wiki/Comma-separated_values). A well-formed .csv file contains column names in the first row, followed by many rows of data. A comma divides each value in each row. For example, here are the first five rows of the .csv fil... | # Import the dataset.
training_df = pd.read_csv(filepath_or_buffer="https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv")
# Scale the label.
training_df["median_house_value"] /= 1000.0
# Print the first rows of the pandas DataFrame.
training_df.head() | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Scaling `median_house_value` puts the value of each house in units of thousands. Scaling will keep loss values and learning rates in a friendlier range. Although scaling a label is usually *not* essential, scaling features in a multi-feature model usually *is* essential. Examine the dataset A large part of most machin... | # Get statistics on the dataset.
training_df.describe()
| _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Task 1: Identify anomalies in the dataset Do you see any anomalies (strange values) in the data? | #@title Double-click to view a possible answer.
# The maximum value (max) of several columns seems very
# high compared to the other quantiles. For example,
# example the total_rooms column. Given the quantile
# values (25%, 50%, and 75%), you might expect the
# max value of total_rooms to be approximately
# 5,000 o... | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Define functions that build and train a model The following code defines two functions: * `build_model(my_learning_rate)`, which builds a randomly-initialized model. * `train_model(model, feature, label, epochs)`, which trains the model from the examples (feature and label) you pass. Since you don't need to understan... | #@title Define the functions that build and train a model
def build_model(my_learning_rate):
"""Create and compile a simple linear regression model."""
# Most simple tf.keras models are sequential.
model = tf.keras.models.Sequential()
# Describe the topography of the model.
# The topography of a simple linea... | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
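The cell above is truncated; a hedged sketch of how such a single-feature linear model is typically built and compiled in tf.keras (everything beyond the visible lines is an assumption):

```python
import tensorflow as tf

def build_model(my_learning_rate):
    """Create and compile a simple linear regression model (sketch)."""
    model = tf.keras.models.Sequential()
    # one node in one dense layer is enough for simple linear regression
    model.add(tf.keras.layers.Dense(units=1, input_shape=(1,)))
    model.compile(
        optimizer=tf.keras.optimizers.RMSprop(learning_rate=my_learning_rate),
        loss="mean_squared_error",
        metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model
```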
Define plotting functions The following [matplotlib](https://developers.google.com/machine-learning/glossary/matplotlib) functions create the following plots: * a scatter plot of the feature vs. the label, and a line showing the output of the trained model * a loss curve You may optionally double-click the headline to s... | #@title Define the plotting functions
def plot_the_model(trained_weight, trained_bias, feature, label):
"""Plot the trained model against 200 random training examples."""
# Label the axes.
plt.xlabel(feature)
plt.ylabel(label)
# Create a scatter plot from 200 random points of the dataset.
random_examples ... | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Call the model functions An important part of machine learning is determining which [features](https://developers.google.com/machine-learning/glossary/feature) correlate with the [label](https://developers.google.com/machine-learning/glossary/label). For example, real-life home-value prediction models typically rely on... | # The following variables are the hyperparameters.
learning_rate = 0.01
epochs = 30
batch_size = 30
# Specify the feature and the label.
my_feature = "total_rooms" # the total number of rooms on a specific city block.
my_label="median_house_value" # the median value of a house on a specific city block.
# That is, you... | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
A certain amount of randomness plays into training a model. Consequently, you'll get different results each time you train the model. That said, given the dataset and the hyperparameters, the trained model will generally do a poor job describing the feature's relation to the label. Use the model to make predictions You... | def predict_house_values(n, feature, label):
"""Predict house values based on a feature."""
batch = training_df[feature][10000:10000 + n]
predicted_values = my_model.predict_on_batch(x=batch)
print("feature label predicted")
print(" value value value")
print(" in thousand$ ... | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Now, invoke the house prediction function on 10 examples: | predict_house_values(10, my_feature, my_label) | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Task 2: Judge the predictive power of the model Look at the preceding table. How close is the predicted value to the label value? In other words, does your model accurately predict house values? | #@title Double-click to view the answer.
# Most of the predicted values differ significantly
# from the label value, so the trained model probably
# doesn't have much predictive power. However, the
# first 10 examples might not be representative of
# the rest of the examples. | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Task 3: Try a different feature The `total_rooms` feature had only a little predictive power. Would a different feature have greater predictive power? Try using `population` as the feature instead of `total_rooms`. Note: When you change features, you might also need to change the hyperparameters. | my_feature = "?" # Replace the ? with population or possibly
# a different column name.
# Experiment with the hyperparameters.
learning_rate = 2
epochs = 3
batch_size = 120
# Don't change anything below this line.
my_model = build_model(learning_rate)
weight, bias, epochs, rmse = train_model(my_m... | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Did `population` produce better predictions than `total_rooms`? | #@title Double-click to view the answer.
# Training is not entirely deterministic, but population
# typically converges at a slightly higher RMSE than
# total_rooms. So, population appears to be about
# the same or slightly worse at making predictions
# than total_rooms. | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Task 4: Define a synthetic feature You have determined that `total_rooms` and `population` were not useful features. That is, neither the total number of rooms in a neighborhood nor the neighborhood's population successfully predicted the median house price of that neighborhood. Perhaps though, the *ratio* of `total_r... | # Define a synthetic feature named rooms_per_person
training_df["rooms_per_person"] = ? # write your code here.
# Don't change the next line.
my_feature = "rooms_per_person"
# Assign values to these three hyperparameters.
learning_rate = ?
epochs = ?
batch_size = ?
# Don't change anything below this line.
my_model =... | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
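One hedged way to fill the `?` placeholders above: the ratio described in the task text, plus plausible starting hyperparameters (these values are illustrative, not the notebook's official answer):

```python
# the synthetic feature: total rooms per person on the block
training_df["rooms_per_person"] = training_df["total_rooms"] / training_df["population"]

# starting hyperparameters to experiment with
learning_rate = 0.06
epochs = 24
batch_size = 30
```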
Based on the loss values, this synthetic feature produces a better model than the individual features you tried in Task 2 and Task 3. However, the model still isn't creating great predictions. Task 5. Find feature(s) whose raw values correlate with the label So far, we've relied on trial-and-error to identify possible ... | # Generate a correlation matrix.
training_df.corr() | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
The correlation matrix shows nine potential features (including a synthetic feature) and one label (`median_house_value`). A strong negative correlation or strong positive correlation with the label suggests a potentially good feature. **Your Task:** Determine which of the nine potential features appears to be the bes... | #@title Double-click here for the solution to Task 5
# The `median_income` correlates 0.7 with the label
# (median_house_value), so median_income` might be a
# good feature. The other seven potential features
# all have a correlation relatively close to 0.
# If time permits, try median_income as the feature
# and ... | _____no_output_____ | Apache-2.0 | ml/cc/exercises/linear_regression_with_a_real_dataset.ipynb | lc0/eng-edu |
Analysis of O3 and SO2: Arduair vs. the Universidad Pontificia Bolivariana station. The results generated by the Arduair device were compared with those of the air quality station owned by the Universidad Pontificia Bolivariana, Bucaramanga campus. Note that during the tests, the equipment was suspected... | import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime as dt
import xlrd
%matplotlib inline
pd.options.mode.chained_assignment = None | _____no_output_____ | MIT | o3_so2_upb/estacion_upb_data_processing_03.ipynb | fega/arduair-calibration |
Correlation studies Correlation plots were produced for ozone and sulfur dioxide against the reference station. The raw data from the ozone sensor was also compared with the calibration equations proposed by the [datasheet](https://www.terraelectronica.ru/%2Fds%2Fpdf%2FM%2Fmq131-lo... | #Arduair prototype data
dfArd=pd.read_csv('DATA.TXT',names=['year','month','day','hour','minute','second','hum','temp','pr','l','co','so2','no2','o3','pm10','pm25','void'])
#Dates to datetime
dates=dfArd[['year','month','day','hour','minute','second']]
dates['year']=dates['year'].add(2000)
dates['minute']=dates['minute... | C:\Users\fega0\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
| MIT | o3_so2_upb/estacion_upb_data_processing_03.ipynb | fega/arduair-calibration |
We define some helper functions | def polyfitEq(x,y):
C= np.polyfit(x,y,1)
m=C[0]
b=C[1]
return 'y = x*{} + {}'.format(m,b)
def calibrate(x,y):
C= np.polyfit(x,y,1)
m=C[0]
b=C[1]
return x*m+b
def rename_labels(obj,unit):
obj.columns=obj.columns.map(lambda x: x.replace('2',' stc_cdmb'))
obj.columns=obj.columns.map... |
1h average ozone, raw
y = x*-7.386462397051218 + 735.7745124254552
3h average
y = x*3.9667587988316875 + 471.89151081632417
| MIT | o3_so2_upb/estacion_upb_data_processing_03.ipynb | fega/arduair-calibration |
Calibrated datasheet values | df2['o3']=calibrate(df2['o3'],df2['ozone_UPB'])
df2.plot(figsize=[15,5])
df3['so2']=calibrate(df3['so2'],df3['raw_so2'])
df3.plot(figsize=[15,5])
df2.head()
df2.columns = ['datetime', 'Ozono estación UPB [ppb]','Ozono prototipo [ppb]','rs','rs_ro','rs_ro_abs']
sns.jointplot(data=df2,x='Ozono prototipo [ppb]',y='Ozon... | C:\Users\fega0\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
| MIT | o3_so2_upb/estacion_upb_data_processing_03.ipynb | fega/arduair-calibration |
Accessing TerraClimate data with the Planetary Computer STAC API [TerraClimate](http://www.climatologylab.org/terraclimate.html) is a dataset of monthly climate and climatic water balance for global terrestrial surfaces from 1958-2019. These data provide important inputs for ecological and hydrological studies at globa... | import warnings
warnings.filterwarnings("ignore", "invalid value", RuntimeWarning) | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
Data accesshttps://planetarycomputer.microsoft.com/api/stac/v1/collections/terraclimate is a STAC Collection with links to all the metadata about this dataset. We'll load it with [PySTAC](https://pystac.readthedocs.io/en/latest/). | import pystac
url = "https://planetarycomputer.microsoft.com/api/stac/v1/collections/terraclimate"
collection = pystac.read_file(url)
collection | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
The collection contains assets, which are links to the root of a Zarr store, which can be opened with xarray. | asset = collection.assets["zarr-https"]
asset
import fsspec
import xarray as xr
store = fsspec.get_mapper(asset.href)
ds = xr.open_zarr(store, **asset.extra_fields["xarray:open_kwargs"])
ds | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
We'll process the data in parallel using [Dask](https://dask.org). | from dask_gateway import GatewayCluster
cluster = GatewayCluster()
cluster.scale(16)
client = cluster.get_client()
print(cluster.dashboard_link) | https://pcc-staging.westeurope.cloudapp.azure.com/compute/services/dask-gateway/clusters/staging.5cae9b2b4c7d4f7fa37c5a4ac1e8112d/status
| MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
The link printed out above can be opened in a new tab or the [Dask labextension](https://github.com/dask/dask-labextension). See [Scale with Dask](https://planetarycomputer.microsoft.com/docs/quickstarts/scale-with-dask/) for more on using Dask, and how to access the Dashboard. Analyze and plot global temperature We can... | import cartopy.crs as ccrs
import matplotlib.pyplot as plt
average_max_temp = ds.isel(time=-1)["tmax"].coarsen(lat=8, lon=8).mean().load()
fig, ax = plt.subplots(figsize=(20, 10), subplot_kw=dict(projection=ccrs.Robinson()))
average_max_temp.plot(ax=ax, transform=ccrs.PlateCarree())
ax.coastlines(); | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
Let's see how temperature has changed over the observational record, when averaged across the entire domain. Since we'll do some other calculations below we'll also add `.load()` to execute the command instead of specifying it lazily. Note that there are some data quality issues before 1965 so we'll start our analysis... | temperature = (
ds["tmax"].sel(time=slice("1965", None)).mean(dim=["lat", "lon"]).persist()
)
temperature.plot(figsize=(12, 6)); | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
With all the seasonal fluctuations (from summer and winter) though, it can be hard to see any obvious trends. So let's try grouping by year and plotting that timeseries. | temperature.groupby("time.year").mean().plot(figsize=(12, 6)); | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
Now the increase in temperature is obvious, even when averaged across the entire domain.Now, let's see how those changes are different in different parts of the world. And let's focus just on summer months in the northern hemisphere, when it's hottest. Let's take a climatological slice at the beginning of the period an... | %%time
import dask
summer_months = [6, 7, 8]
summer = ds.tmax.where(ds.time.dt.month.isin(summer_months), drop=True)
early_period = slice("1958-01-01", "1988-12-31")
late_period = slice("1988-01-01", "2018-12-31")
early, late = dask.compute(
summer.sel(time=early_period).mean(dim="time"),
summer.sel(time=lat... | _____no_output_____ | MIT | datasets/terraclimate/terraclimate-example.ipynb | ianthomas23/PlanetaryComputerExamples |
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning _**Text classification using deep learning**_ Contents 1. [Preparation](1.-事前準備) 1. [Automated Machine Learning](2.-自動機械学習-Automated-Machine-Learning) 1. [Reviewing the results](3.-結果の確認) 1. Preparation In this demonstration, we use the deep learning capabilities of AutoML to build a text classification mod... | import logging
import os
import shutil
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.co... | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
Verify that the Azure ML Python SDK version is 1.8.0 or later. | print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
1.2 Connect to the Azure ML Workspace | ws = Workspace.from_config()
# specify the experiment name
experiment_name = 'livedoor-news-classification-BERT'
experiment = Experiment(ws, experiment_name)
output = {}
#output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output[... | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
1.3 Prepare the compute environment We provision a GPU `Compute Cluster` to use BERT. | from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# name of the Compute Cluster
amlcompute_cluster_name = "gpucluster"
# check whether the cluster already exists
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
except ComputeTargetException:
... | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
1.4 Prepare the training data Here we use the [livedoor News](https://www.rondhuit.com/download/ldcc-20140209.tar.gz) corpus as training data and build a news category classification model. | target_column_name = 'label' # the category column
feature_column_name = 'text' # the news article column
train_dataset = Dataset.get_by_name(ws, "livedoor").keep_columns(["text","label"])
train_dataset.take(5).to_pandas_dataframe() | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
2. Automated Machine Learning 2.1 Settings and constraints We configure Automated Machine Learning and run the training. | from azureml.automl.core.featurization import FeaturizationConfig
featurization_config = FeaturizationConfig()
# specify the language of the text data; use "jpn" for Japanese
featurization_config = FeaturizationConfig(dataset_language="jpn") # comment this line out for English data
# explicitly mark the `text` column as text data
featurization_config.add_column_purpose(... | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
2.2 Model training We start model training with Automated Machine Learning. | automl_run = experiment.submit(automl_config, show_output=False)
# print the run_id
automl_run.id
# print the Azure Machine Learning studio URL
automl_run
# # recovery steps in case the session was interrupted
# from azureml.train.automl.run import AutoMLRun
# ws = Workspace.from_config()
# experiment = ws.experiments['livedoor-news-classification-BERT']
# run... | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
2.3 Register the model | # retrieve the most accurate model
best_run, fitted_model = automl_run.get_output()
# download the model file (.pkl)
model_dir = '../model'
best_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')
# register the model with Azure ML
model_name = 'livedoor-model'
model = Model.register(model_path = model_dir + '/model.pkl',
model... | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
3. Output predictions for the test data | from sklearn.externals import joblib
trained_model = joblib.load(model_dir + '/model.pkl')
trained_model
test_dataset = Dataset.get_by_name(ws, "livedoor").keep_columns(["text"])
predicted = trained_model.predict_proba(test_dataset.to_pandas_dataframe()) | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
4. Model interpretation We select the champion model with the best accuracy and interpret it. The libraries the model depends on must be installed in the Python environment beforehand. Use [automl_env.yml](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/automl_env.yml) to install the required packages into a conda virtual environment. | # inspect the engineered feature names
fitted_model.named_steps['datatransformer'].get_json_strs_for_engineered_feature_names()
#fitted_model.named_steps['datatransformer']. get_engineered_feature_names ()
# visualize the featurization process
text_transformations_used = []
for column_group in fitted_model.named_steps['datatransformer'].get_featurizat... | _____no_output_____ | MIT | notebooks/automl-classification-Force-text-dnn.ipynb | konabuta/AutoML-Pipeline |
process_autogluon_results - cleans up the dataframes a bit for the report setup | #@markdown add auto-Colab formatting with `IPython.display`
from IPython.display import HTML, display
# colab formatting
def set_css():
display(
HTML(
"""
<style>
pre {
white-space: pre-wrap;
}
</style>
"""
)
)
get_ipython().events.register("pre_run_cell", set_... | _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
define folder for outputs | _out_dir_name = "Formatted-results-report" #@param {type:"string"}
output_path = os.path.join(path, _out_dir_name)
os.makedirs(output_path, exist_ok=True)
print(f"notebook outputs will be stored in:\n{output_path}")
_out = Path(output_path)
_src = Path(path) | _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
load data (MIT-BIH) | data_dir = _src / "final-results"
csv_files = {f.stem:f for f in data_dir.iterdir() if f.is_file() and f.suffix=='.csv'}
print(csv_files)
mit_ag = pd.read_csv(csv_files['mitbih_autogluon_results'])
mit_ag.info()
mit_ag.sort_values(by='score_val', ascending=False, inplace=True)
mit_ag.head()
orig_cols = list(mit_ag.c... | _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
save the reformatted MIT-BIH AutoGluon results | mit_ag.to_csv(_out / "MITBIH_autogluon_baseline_results_Accuracy.csv", index=False) | _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
PTB reformat | ptb_ag = pd.read_csv(csv_files['ptbdb_autogluon_results']).convert_dtypes()
ptb_ag.info()
ptb_ag.sort_values(by='score_val', ascending=False, inplace=True)
ptb_ag.head()
orig_cols = list(ptb_ag.columns)
new_cols = []
for i, col in enumerate(orig_cols):
col = col.lower()
if 'unnamed' in col:
new_cols.app... | _____no_output_____ | Apache-2.0 | notebooks/colab/automl-baseline/process_autogluon_results.ipynb | pszemraj/ml4hc-s22-project01 |
Repeatable splitting In this notebook, we will explore the impact of different ways of creating machine learning datasets. Repeatability is important in machine learning. If you do the same thing now and 5 minutes from now and get different answers, then experimentation is difficult. In other words, you will f... | import google.datalab.bigquery as bq
Create a simple machine learning model The dataset that we will use is a BigQuery public dataset of airline arrival data. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is 70 million, and then switch to the Preview tab to look at a few rows.We want to pr... | compute_alpha = """
#standardSQL
SELECT
SAFE_DIVIDE(SUM(arrival_delay * departure_delay), SUM(departure_delay * departure_delay)) AS alpha
FROM
(
SELECT RAND() AS splitfield,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN' AND arrival_a... | 0.975701430281
| Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
What is wrong with calculating RMSE on the training and test data as follows? | compute_rmse = """
#standardSQL
SELECT
dataset,
SQRT(AVG((arrival_delay - ALPHA * departure_delay)*(arrival_delay - ALPHA * departure_delay))) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM (
SELECT
IF (RAND() < 0.8, 'train', 'eval') AS dataset,
arrival_delay,
departure_delay
FROM
`bigquery... | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
Hint:* Are you really getting the same training data in the compute_rmse query as in the compute_alpha query?* Do you get the same answers each time you rerun the compute_alpha and compute_rmse blocks? How do we correctly train and evaluate? Here's the right way to compute the RMSE using the actual training and held-o... | train_and_eval_rand = """
#standardSQL
WITH
alldata AS (
SELECT
IF (RAND() < 0.8,
'train',
'eval') AS dataset,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX' ),
training AS (
S... | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
Using HASH of date to split the data Let's split by date and train. | compute_alpha = """
#standardSQL
SELECT
SAFE_DIVIDE(SUM(arrival_delay * departure_delay), SUM(departure_delay * departure_delay)) AS alpha
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN' AND arrival_airport = 'LAX'
AND MOD(ABS(FARM_FINGERPRINT(date)), 10) < 8
"""
results = ... | 0.975803914362
| Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
We can now use the alpha to compute RMSE. Because the alpha value is repeatable, we don't need to worry that the alpha in the compute_rmse will be different from the alpha computed in the compute_alpha. | compute_rmse = """
#standardSQL
SELECT
IF(MOD(ABS(FARM_FINGERPRINT(date)), 10) < 8, 'train', 'eval') AS dataset,
SQRT(AVG((arrival_delay - ALPHA * departure_delay)*(arrival_delay - ALPHA * departure_delay))) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE... | dataset rmse num_flights
0 eval 12.764685 15671
1 train 13.160712 64018
| Apache-2.0 | courses/machine_learning/deepdive/02_generalization/repeatable_splitting.ipynb | AmirQureshi/code-to-run- |
Reading Survey Data (Sanna Tyrvainen 2021) Code to read the soft CIFAR-10 survey results. survey_answers = a pickle file with a list of arrays of survey results and original CIFAR-10 labels; data_batch_1 = a pickle file of CIFAR-10 1/5 training dataset with a dictionary of * b'batch_label', = 'training... |
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pickle
import torch
def unpickle(file):
with open(file, 'rb') as fo:
dict = pickle.load(fo, encoding='bytes')
return dict
def imagshow(img):
plt.imshow(np.transpose(img, (1, 2, 0)))
plt.show()
labels = unpickl... | Example:
survey answer: ([0, 0, 0, 0, 2, 0, 0, 4, 0, 0], 7)
| MIT | data/read_data.ipynb | sannatti/softcifar |
Import the necessary packages | from __future__ import print_function, division, absolute_import
import tensorflow as tf
from tensorflow.contrib import keras
import numpy as np
import os
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
import itertools
import cPickle #python 2.x
#import _pickle as cPickle #python 3.x
i... | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Now read the data | with h5py.File("NS_LP_DS.h5", "r") as hf:
LFP_features_train = hf["LFP_features_train"][...]
targets_train = hf["targets_train"][...]
speeds_train = hf["speeds_train"][...]
LFP_features_eval = hf["LFP_features_eval"][...]
targets_eval = hf["targets_eval"][...]
speeds_eval = hf["speeds_eval"][...] | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
And make sure it looks ok | rand_sample = np.random.randint(LFP_features_eval.shape[0])
for i in range(LFP_features_train.shape[-1]):
plt.figure(figsize=(20,7))
plt_data = LFP_features_eval[rand_sample,:,i]
plt.plot(np.arange(-0.5, 0., 0.5/plt_data.shape[0]), plt_data)
plt.xlabel("time")
plt.title(str(i)) | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Now we write some helper functions to easily select regions. | block = np.array([[2,4,6,8],[1,3,5,7]])
channels = np.concatenate([(block + i*8) for i in range(180)][::-1])
brain_regions = {'Parietal Cortex': 8000, 'Hypocampus CA1': 6230, 'Hypocampus DG': 5760, 'Thalamus LPMR': 4450,
'Thalamus Posterior': 3500, 'Thalamus VPM': 1930, 'SubThalamic': 1050}
brain_regio... | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Create a callback to save the best validation accuracy | model_chk_path = 'my_model.hdf5'
mcp = keras.callbacks.ModelCheckpoint(model_chk_path, monitor="val_acc",
save_best_only=True) | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Below I have defined a couple of different network architectures to play with. | # try:
# model = None
# except NameError:
# pass
# decay = 1e-3
# conv1d = keras.layers.Convolution1D
# maxPool = keras.layers.MaxPool1D
# model = keras.models.Sequential()
# model.add(conv1d(64, 5, padding='same', strides=2, activation='relu',
# kernel_regularizer=keras.regularizers.l2(decay),
# ... | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
Helper function for the confusion matrix | def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normaliz... | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
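The cell above is cut off; a hedged completion following the well-known scikit-learn example that this docstring matches:

```python
import itertools
import numpy as np
import matplotlib.pyplot as plt

def plot_confusion_matrix(cm, classes, normalize=False,
                          title='Confusion matrix', cmap=plt.cm.Blues):
    """Print and plot a confusion matrix (sketch of the truncated cell)."""
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], '.2f' if normalize else 'd'),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()
```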
Train and evaluation accuracies | acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.figure(figsize=(20,7))
plt.plot(epochs, acc, 'bo', label='Training')
plt.plot(epochs, val_acc, 'b', label='Validation')
plt.title('Training and validation ... | _____no_output_____ | MIT | Neuroseeker_Analysis.ipynb | atabakd/start_brain |
IDS Instruction: Regression (Lisa Mannel) Simple linear regression First we import the packages necessary for this instruction: | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error | _____no_output_____ | MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
Consider the data set "df" with feature variables "x" and "y" given below. | df1 = pd.DataFrame({'x': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 'y': [1, 3, 2, 5, 7, 8, 8, 9, 10, 12]})
print(df1) | x y
0 0 1
1 1 3
2 2 2
3 3 5
4 4 7
5 5 8
6 6 8
7 7 9
8 8 10
9 9 12
| MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
To get a first impression of the given data, let's have a look at its scatter plot: | plt.scatter(df1.x, df1.y, color = "y", marker = "o", s = 40)
plt.xlabel('x')
plt.ylabel('y')
plt.title('first overview of the data')
plt.show() | _____no_output_____ | MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
We can already see a linear correlation between x and y. Assume the feature x to be descriptive, while y is our target feature. We want a linear function, y=ax+b, that predicts y as accurately as possible based on x. To achieve this goal we use linear regression from the sklearn package. | #define the set of descriptive features (in this case only 'x' is in that set) and the target feature (in this case 'y')
descriptiveFeatures1=df1[['x']]
print(descriptiveFeatures1)
targetFeature1=df1['y']
#define the classifier
classifier = LinearRegression()
#train the classifier
model1 = classifier.fit(descriptiveFe... | x
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
| MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
Now we can use the classifier to predict y. We print the predictions as well as the coefficient and bias (*intercept*) of the linear function. | #use the classifier to make prediction
targetFeature1_predict = classifier.predict(descriptiveFeatures1)
print(targetFeature1_predict)
#print coefficient and intercept
print('Coefficients: \n', classifier.coef_)
print('Intercept: \n', classifier.intercept_) | [ 1.23636364 2.40606061 3.57575758 4.74545455 5.91515152 7.08484848
8.25454545 9.42424242 10.59393939 11.76363636]
Coefficients:
[1.16969697]
Intercept:
1.2363636363636399
| MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
Let's visualize our regression function with the scatter plot showing the original data set. For this, we use the predicted values. | #visualize data points
plt.scatter(df1.x, df1.y, color = "y", marker = "o", s = 40)
#visualize regression function
plt.plot(descriptiveFeatures1, targetFeature1_predict, color = "g")
plt.xlabel('x')
plt.ylabel('y')
plt.title('the data and the regression function')
plt.show() | _____no_output_____ | MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
Now it is your turn. Build a simple linear regression for the data below. Use col1 as descriptive feature and col2 as target feature. Also plot your results. | df2 = pd.DataFrame({'col1': [770, 677, 428, 410, 371, 504, 1136, 695, 551, 550], 'col2': [54, 47, 28, 38, 29, 38, 80, 52, 45, 40]})
#Your turn
# features that we use for the prediction are called the "descriptive" features
descriptiveFeatures2=df2[['col1']]
# the feature we would like to predict is called target fueatu... | _____no_output_____ | MIT | Instruction4/Instruction4-RegressionSVM.ipynb | danikhani/ITDS-Instructions-WS20 |
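A hedged completion of the exercise, mirroring the df1 walkthrough above (it reuses the imports and `descriptiveFeatures2` from the cells above):

```python
targetFeature2 = df2['col2']
classifier2 = LinearRegression()
model2 = classifier2.fit(descriptiveFeatures2, targetFeature2)
targetFeature2_predict = classifier2.predict(descriptiveFeatures2)

# visualize the data points and the fitted regression line
plt.scatter(df2.col1, df2.col2, color="y", marker="o", s=40)
plt.plot(descriptiveFeatures2, targetFeature2_predict, color="g")
plt.xlabel('col1')
plt.ylabel('col2')
plt.title('the data and the regression function')
plt.show()
```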