Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv)

Step 3. Assign it to a variable called `apple`
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 4. Check out the type of the columns
apple.dtypes
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 5. Transform the Date column into a datetime type
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 6. Set the date as the index
apple = apple.set_index('Date')
apple.head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 7. Are there any duplicate dates?
# NO! All are unique
apple.index.is_unique
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
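The uniqueness check above can be sketched on a toy index. This is a minimal, self-contained illustration with hypothetical dates (not the Apple data): `is_unique` answers the yes/no question, and `duplicated()` locates the repeats.

```python
import pandas as pd

# Toy index with one repeated date (hypothetical data, for illustration)
idx = pd.DatetimeIndex(["2014-07-08", "2014-07-07", "2014-07-07"])
print(idx.is_unique)           # False: "2014-07-07" appears twice
print(idx.duplicated().sum())  # 1: only the second occurrence is flagged by default
```

On the real dataset the same two calls would report whether any trading day appears more than once.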
Step 8. Oops... it seems the index starts from the most recent date. Make the first entry the oldest date.
apple = apple.sort_index(ascending=True)
apple.head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
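Note that `sort_index` returns a new frame, so the result must be assigned back for the reordering to stick. A minimal sketch with hypothetical values shows how to verify the sort took effect:

```python
import pandas as pd

# Toy frame with a descending DatetimeIndex (hypothetical values)
df = pd.DataFrame({"Close": [3.0, 2.0, 1.0]},
                  index=pd.to_datetime(["2014-07-08", "2014-07-07", "2014-07-06"]))
df = df.sort_index(ascending=True)

# The oldest date is now first, and the index is monotonically increasing
print(df.index.is_monotonic_increasing)  # True
print(df["Close"].iloc[0])               # 1.0, the value at the oldest date
```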
Step 9. Get the last business day of each month
apple_month = apple.resample('BM').mean()
apple_month.head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 10. What is the difference in days between the first and the last date?
(apple.index.max() - apple.index.min()).days
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
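The `(max - min).days` idiom works because subtracting two timestamps yields a `Timedelta`. A tiny sketch with hypothetical dates:

```python
import pandas as pd

# Toy index spanning three days (hypothetical dates)
idx = pd.to_datetime(["1980-12-12", "1980-12-13", "1980-12-15"])
span_days = (idx.max() - idx.min()).days
print(span_days)  # 3
```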
Step 11. How many months of data do we have?
apple_months = apple.resample('BM').mean()
len(apple_months.index)
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
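Counting months this way relies on `resample` producing exactly one row per period. A self-contained sketch on three months of hypothetical daily data (using the month-start alias `"MS"` rather than the business-month `"BM"` of the exercise):

```python
import numpy as np
import pandas as pd

# Daily toy series covering January through March 2020 (hypothetical values)
dates = pd.date_range("2020-01-01", "2020-03-31", freq="D")
s = pd.Series(np.arange(len(dates), dtype=float), index=dates)

monthly = s.resample("MS").mean()  # one row per calendar month
print(len(monthly))  # 3
```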
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
# make the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title="Apple Stock")

# change the size of the figure
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
**Colab RDP**: Remote Desktop to a Colab instance, using Google Remote Desktop & an Ngrok tunnel.

> **Warning: Not for Cryptocurrency Mining**

> **Why are hardware resources such as T4 GPUs not available to me?** The best available hardware is prioritized for users who use Colaboratory interactively rather than for long-runni...
#@title **Create User**
#@markdown Enter Username and Password

username = "user" #@param {type:"string"}
password = "root" #@param {type:"string"}

print("Creating User and Setting it up")

# Creation of user
! sudo useradd -m $username &> /dev/null

# Add user to sudo group
! sudo adduser $username sudo &> /dev/null
...
_____no_output_____
MIT
Colab RDP/Colab RDP.ipynb
Apon77/Colab-Hacks
Dev Original method
# you need to download these from the cellphonedb website / github and replace the path accordingly
dat = 'C:/Users/Stefan/Downloads/cellphonedb_example_data/example_data/'
metafile = dat + 'test_meta.txt'
countfile = dat + 'test_counts.txt'

statistical_analysis(meta_filename=metafile, counts_filename=countfile)
pd.read_csv('...
_____no_output_____
MIT
scanpy_cellphonedb.ipynb
stefanpeidli/cellphonedb
scanpy API test official cellphonedb example data
# you need to download these from the cellphonedb website / github and replace the path accordingly
dat = 'C:/Users/Stefan/Downloads/cellphonedb_example_data/example_data/'
metafile = dat + 'test_meta.txt'
countfile = dat + 'test_counts.txt'

bdata = sc.AnnData(pd.read_csv(countfile, sep='\t', index_col=0).values.T, obs=pd.read_cs...
_____no_output_____
MIT
scanpy_cellphonedb.ipynb
stefanpeidli/cellphonedb
Crop XML: manually change the line and sample values in the XML to match (n_lines, n_samples)
eis_xml = expatbuilder.parse(eis_xml_filename, False)
eis_dom = eis_xml.getElementsByTagName("File_Area_Observational").item(0)
dom_lines = eis_dom.getElementsByTagName("Axis_Array").item(0)
dom_samples = eis_dom.getElementsByTagName("Axis_Array").item(1)
dom_lines = dom_lines.getElementsByTagName("elements")[0]
dom_...
_____no_output_____
CC0-1.0
isis/notebooks/crop_eis.ipynb
gknorman/ISIS3
crop image
dn_size_bytes = 4    # number of bytes per DN
n_lines = 60         # how many to crop to
n_samples = 3
start_line = 1200    # point to start crop from
start_sample = 1200

image_offset = (start_line * total_samples + start_sample) * dn_size_bytes
line_length = n_samples * dn_size_bytes
buffer_size = n_lines * total_samples * dn...
_____no_output_____
CC0-1.0
isis/notebooks/crop_eis.ipynb
gknorman/ISIS3
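The offset arithmetic above treats the image as a flat row-major buffer: a pixel at (line, sample) lives at `(line * total_samples + sample) * dn_size_bytes`. A minimal sketch with small hypothetical sizes (not the EIS values):

```python
# Byte-offset arithmetic for a flat row-major raster (toy sizes, for illustration)
dn_size_bytes = 4     # bytes per DN
total_samples = 100   # full image width in samples
start_line, start_sample = 2, 10

image_offset = (start_line * total_samples + start_sample) * dn_size_bytes
print(image_offset)  # (2*100 + 10) * 4 = 840
```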
crop times csv table
import pandas as pd

# assumes csv file has the same filename with _times appended
eis_csv_fn = image_fn + "_times.csv"
df1 = pd.read_csv(eis_csv_fn)
df1

x = np.array(df1)
y = x[:n_lines, :]
df = pd.DataFrame(y)
df

crop = "_cropped"
csv_fn, csv_ext = os.path.splitext(eis_csv_fn)
crop_csv_fn = csv_fn + crop + csv_ext
cr...
_____no_output_____
CC0-1.0
isis/notebooks/crop_eis.ipynb
gknorman/ISIS3
Cryptocurrency Clusters
%matplotlib inline

# import dependencies
from pathlib import Path
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Data Preparation
# read data in using pandas
df = pd.read_csv('Resources/crypto_data.csv')
df.head(10)
df.dtypes

# Discard all cryptocurrencies that are not being traded.
# In other words, filter for currencies that are currently being traded.
myfilter = (df['IsTrading'] == True)
trading_df = df.loc[myfilter]
trading_df = trading_df.drop(...
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Examine the number of rows and columns of your dataset now. How did they change? There were 71 unique algorithms and 25 unique prooftypes, so we now have 98 features in the dataset, which is quite large.
# Standardize your dataset so that columns that contain larger values do not unduly influence the outcome.
scaled_data = StandardScaler().fit_transform(final_df)
scaled_data
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
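Standardization rescales each column to zero mean and unit variance, which is what keeps large-valued columns from dominating PCA and k-means. A self-contained sketch on a small hypothetical matrix:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two columns with very different magnitudes (hypothetical data)
X = np.array([[1.0, 1000.0],
              [2.0, 2000.0],
              [3.0, 3000.0]])

X_scaled = StandardScaler().fit_transform(X)

# Each column now has mean ~0 and unit standard deviation
print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # ~[1. 1.]
```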
Dimensionality Reduction Creating dummy variables above dramatically increased the number of features in your dataset. Perform dimensionality reduction with PCA. Rather than specify the number of principal components when you instantiate the PCA model, it is possible to state the desired explained variance. For this ...
# Applying PCA to reduce dimensions

# Initialize PCA model, keeping enough components to explain 90% of the variance
pca = PCA(.90)

# Project the scaled data onto the principal components
data_pca = pca.fit_transform(scaled_data)
pca.explained_variance_ratio_

# df with the principal components (columns)
pd.DataFrame(data_pca)
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Next, further reduce the dataset dimensions with t-SNE and visually inspect the results. In order to accomplish this task, run t-SNE on the principal components: the output of the PCA transformation. Then create a scatter plot of the t-SNE output. Observe whether there are distinct clusters or not.
# Initialize t-SNE model
tsne = TSNE(learning_rate=35)

# Reduce dimensions
tsne_features = tsne.fit_transform(data_pca)

# The dataset has 2 columns
tsne_features.shape

# Visualize the clusters
plt.scatter(tsne_features[:, 0], tsne_features[:, 1])
plt.show()
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Cluster Analysis with k-Means Create an elbow plot to identify the best number of clusters.
# Use a for-loop to determine the inertia for each k between 1 through 10.
# Determine, if possible, where the elbow of the plot is, and at which value of k it appears.
inertia = []
k = list(range(1, 11))
for i in k:
    km = KMeans(n_clusters=i)
    km.fit(data_pca)
    inertia.append(km.inertia_)

# Define a Dat...
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
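The elbow method works because inertia (within-cluster sum of squares) can only decrease as k grows; the "elbow" is where the decrease levels off. A minimal sketch on two well-separated hypothetical blobs, where the big drop happens at k=2:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# Two well-separated toy blobs (hypothetical data)
X = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 10])

inertia = []
for k in range(1, 4):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertia.append(km.inertia_)

# Inertia is non-increasing in k; the sharp drop from k=1 to k=2 marks the elbow here
print(inertia)
```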
Our best model: CatBoost with a learning rate of 0.7 and 180 iterations. It was trained on 10 files of the data with a similar distribution of the feature user_target_recs (among the number of rows of each feature value). We received an AUC of 0.845 on the Kaggle leaderboard.

Mount Drive
from google.colab import drive
drive.mount("/content/drive")
Mounted at /content/drive
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Installations and Imports
# !pip install scikit-surprise
!pip install catboost
# !pip install xgboost

import os
import pandas as pd
# import xgboost
# from xgboost import XGBClassifier
# import pickle
import catboost
from catboost import CatBoostClassifier
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Global Parameters and Methods
home_path = "/content/drive/MyDrive/RS_Kaggle_Competition"

def get_train_files_paths(path):
    dir_paths = [os.path.join(path, dir_name) for dir_name in os.listdir(path) if dir_name.startswith("train")]
    file_paths = []
    for dir_path in dir_paths:
        curr_dir_file_paths = [os.path.join(dir_path, file_name) for fi...
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Get Data
def get_df_of_many_files(paths_list, number_of_files):
    for i in range(number_of_files):
        path = paths_list[i]
        curr_df = pd.read_csv(path)
        if i == 0:
            df = curr_df
        else:
            df = pd.concat([df, curr_df])
    return df

sample_train_data = get_df_of_many_files(train_file_paths[-21:], 10)
# sam...
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Preprocess data
train_data = sample_train_data.fillna("Unknown")
val_data = sample_val_data.fillna("Unknown")
train_data

import gc
del sample_val_data
del sample_train_data
gc.collect()
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Scale columns
# scale columns
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler

scaling_cols = ["empiric_calibrated_recs", "empiric_clicks", "empiric_calibrated_recs", "user_recs", "user_clicks", "user_target_recs"]

scaler = StandardScaler()
train_data[scaling_cols] = scaler.fit_transf...
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Explore Data
sample_train_data
test_data

from collections import Counter

user_recs_dist = test_data["user_recs"].value_counts(normalize=True)
top_user_recs_count = user_recs_dist.nlargest(200)
print(top_user_recs_count)

percent = sum(top_user_recs_count.values)
percent
print(sample_train_data["user_recs"].value_counts(normalize=...
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Train the model
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn import metrics

X_train = train_data.drop(columns=["is_click"], inplace=False)
y_train = train_data["is_click"]
X_val = val_data.drop(columns=["is_click"], i...
0:   test: 0.8149026  best: 0.8149026 (0)   total: 6.36s  remaining: 18m 57s
10:  test: 0.8461028  best: 0.8461028 (10)  total: 53.6s  remaining: 13m 44s
20:  test: 0.8490288  best: 0.8490288 (20)  total: 1m 38s  remaining: 12m 26s
30:  test: 0.8505695  best: 0.8505695 (30)  total: 2m 23s  remaining: 11m 29s
40:  test: 0.8514950  best: 0....
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Submission File
test_data = pd.read_csv("/content/drive/MyDrive/RS_Kaggle_Competition/test/test_file.csv")
test_data = test_data.iloc[:, :-1]
test_data[scaling_cols] = scaler.transform(test_data[scaling_cols])
X_test = test_data.fillna("Unknown")
X_test

pred_proba = model.predict_proba(X_test)

submission_dir_path = "/content/drive/M...
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Random Search Algorithms

Importing Necessary Libraries
import six
import sys
sys.modules['sklearn.externals.six'] = six
import mlrose
import numpy as np
import pandas as pd
import seaborn as sns
import mlrose_hiive
import matplotlib.pyplot as plt

np.random.seed(44)
sns.set_style("darkgrid")
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Defining a Fitness Function Object
# Define alternative N-Queens fitness function for maximization problem
def queens_max(state):
    # Initialize counter
    fitness = 0
    # For all pairs of queens
    for i in range(len(state) - 1):
        for j in range(i + 1, len(state)):
            # Check for horizontal, diagonal-up and ...
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Defining an Optimization Problem Object
%%time
# DiscreteOpt() takes integers in range 0 to max_val - 1, defined at initialization
number_of_queens = 16
problem = mlrose_hiive.DiscreteOpt(length=number_of_queens, fitness_fn=fitness_cust, maximize=True, max_val=number_of_queens)
CPU times: user 138 µs, sys: 79 µs, total: 217 µs
Wall time: 209 µs
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Optimization 1 Simulated Annealing
%%time
sa = mlrose_hiive.SARunner(problem,
                          experiment_name="SA_Exp",
                          iteration_list=[10000],
                          temperature_list=[10, 50, 100, 250, 500],
                          decay_list=[mlrose_hiive.ExpDecay, mlrose_hiive.GeomDecay],
                          ...
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Optimization 2 Genetic Algorithm
%%time
ga = mlrose_hiive.GARunner(problem=problem,
                          experiment_name="GA_Exp",
                          seed=44,
                          iteration_list=[10000],
                          max_attempts=100,
                          population_sizes=[100, 500],
                          m...
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Optimization 3 MIMIC
%%time
mmc = mlrose_hiive.MIMICRunner(problem=problem,
                              experiment_name="MMC_Exp",
                              seed=44,
                              iteration_list=[10000],
                              max_attempts=100,
                              population_sizes=[100, 500],
                              kee...
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Optimization 4 Randomized Hill Climbing
%%time
runner_return = mlrose_hiive.RHCRunner(problem,
                                       experiment_name="first_try",
                                       iteration_list=[10000],
                                       seed=44,
                                       max_attempts=100,
                                       restart_list=[100])
rhc_run_stats, rhc_run_curves = runner_return....
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Performance Tuning Guide

**Author**: `Szymon Migacz `_

Performance Tuning Guide is a set of optimizations and best practices which can accelerate training and inference of deep learning models in PyTorch. Presented techniques often can be implemented by changing only a few lines of code and can be a...
model.zero_grad() # or optimizer.zero_grad()
_____no_output_____
BSD-3-Clause
docs/_downloads/a584d8a4ce8e691ca795984e7a5eedbd/tuning_guide.ipynb
junhyung9985/PyTorch-tutorials-kr
to zero out gradients, use the following method instead:
for param in model.parameters():
    param.grad = None
_____no_output_____
BSD-3-Clause
docs/_downloads/a584d8a4ce8e691ca795984e7a5eedbd/tuning_guide.ipynb
junhyung9985/PyTorch-tutorials-kr
The second code snippet does not zero the memory of each individual parameter; also, the subsequent backward pass uses assignment instead of addition to store gradients, which reduces the number of memory operations. Setting the gradient to ``None`` has a slightly different numerical behavior than setting it to zero; for more de...
@torch.jit.script
def fused_gelu(x):
    return x * 0.5 * (1.0 + torch.erf(x / 1.41421))
_____no_output_____
BSD-3-Clause
docs/_downloads/a584d8a4ce8e691ca795984e7a5eedbd/tuning_guide.ipynb
junhyung9985/PyTorch-tutorials-kr
Refer to `TorchScript documentation `_ for more advanced use cases.

Enable channels_last memory format for computer vision models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PyTorch 1.5 introduced support for ``channels_last`` memory format for convolutional networks. This format is meant to be used in con...
torch.backends.cudnn.benchmark = True
_____no_output_____
BSD-3-Clause
docs/_downloads/a584d8a4ce8e691ca795984e7a5eedbd/tuning_guide.ipynb
junhyung9985/PyTorch-tutorials-kr
78. Subsets

__Difficulty__: Medium

[Link](https://leetcode.com/problems/subsets/)

Given an integer array `nums` of unique elements, return all possible subsets (the power set). The solution set must not contain duplicate subsets. Return the solution in any order.

__Example 1__:
Input: `nums = [1,2,3]`
Output: `[[],[1],[2],[1...
from typing import List
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
DFS Approach
class SolutionDFS:
    def dfs(self, res, nums):
        if len(nums) == 0:
            return [res]
        ans = []
        for i, num in enumerate(nums):
            # print(res + [num])
            ans.extend(self.dfs(res + [num], nums[i+1:]))
        ans.append(res)
        # print(ans)
        return ans

    def ...
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
Using a bit-mask to indicate selected items from the list of numbers
class SolutionMask:
    def subsets(self, nums: List[int]) -> List[List[int]]:
        combs = []
        n = len(nums)
        for mask in range(0, 2**n):
            i = 0
            rem = mask
            current_set = []
            while rem:
                if rem % 2:
                    current_set.appe...
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
A cleaner and more efficient implementation using a bit-mask.
class SolutionMask2:
    def subsets(self, nums: List[int]) -> List[List[int]]:
        res = []
        n = len(nums)
        nth_bit = 1 << n
        for i in range(2**n):
            # To create a bit-mask with length n
            bit_mask = bin(i | nth_bit)[3:]
            res.append([nums[j] for j in ...
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
Test cases
# Example 1
nums1 = [1,2,3]
res1 = [[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]

# Example 2
nums2 = [0]
res2 = [[],[0]]

# Example 3
nums3 = [0, -2, 5, -7, 9]
res3 = [[0,-2,5,-7,9],[0,-2,5,-7],[0,-2,5,9],[0,-2,5],[0,-2,-7,9],[0,-2,-7],[0,-2,9],[0,-2],[0,5,-7,9],[0,5,-7],[0,5,9],[0,5],[0,-7,9],[0,-7],[0,9],[0],[-2,5,-7,9...
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
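The bit-mask idea above can be condensed into a few lines: bit `i` of each mask in `0..2**n - 1` decides whether `nums[i]` is in the subset, so the masks enumerate the whole power set. A minimal, self-contained sketch (a simplified variant, not the exact `SolutionMask2` code):

```python
from typing import List

def subsets(nums: List[int]) -> List[List[int]]:
    # Bit i of mask selects nums[i]; 2**n masks enumerate the power set
    n = len(nums)
    return [[nums[i] for i in range(n) if mask >> i & 1]
            for mask in range(2 ** n)]

result = subsets([1, 2, 3])
print(len(result))  # 8 subsets for n = 3
```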
**Matrix factorization** is a class of collaborative filtering algorithms used in recommender systems. **Matrix factorization** approximates a given rating matrix as a product of two lower-rank matrices. It decomposes a rating matrix R (n×m) into a product of two matrices W (n×d) and U (m×d).

\begin{equation*}\mathbf{R}_{n ...
# install pyspark
!pip install pyspark
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Importing the necessary libraries
# Import the necessary libraries
from pyspark import SparkContext, SQLContext  # required for dealing with dataframes
import numpy as np
from pyspark.ml.recommendation import ALS  # for Matrix Factorization using ALS

# instantiating spark context and SQL context
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Step 1. Loading the data into a PySpark dataframe
# Read the dataset into a dataframe
jester_ratings_df = sqlContext.read.csv("/kaggle/input/jester-17m-jokes-ratings-dataset/jester_ratings.csv", header=True, inferSchema=True)

# show the ratings
jester_ratings_df.show(5)

# Print the total number of ratings, unique users and unique jokes.
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Step 2. Splitting into train and test part
# Split the dataset using randomSplit in a 90:10 ratio

# Print the training data size and the test data size

# Show the train set
X_train.show(5)

# Show the test set
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Step 3. Fitting an ALS model
# Fit an ALS model with rank=5, maxIter=10 and seed=0

# displaying the latent features for five users
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Step 4. Making predictions
# Pass userId and jokeId from the test dataset as arguments

# Join X_test and the prediction dataframe and also drop the records for which no predictions are made
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Step 5. Evaluating the model
# Convert the columns into numpy arrays for direct and easy calculations

# Also print the RMSE
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Step 6. Recommending jokes
# Recommend top 3 jokes for all the users with highest predicted rating
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Simple Flavor MixingIllustrate very basic neutrino flavor mixing in supernova neutrinos using the `SimpleMixing` class in ASTERIA.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

import astropy.units as u

from asteria import config, source
from asteria.neutrino import Flavor
from asteria.oscillation import SimpleMixing

mpl.rc('font', size=16)
_____no_output_____
BSD-3-Clause
docs/nb/simplemixing_class.ipynb
IceCubeOpenSource/USSR
Load CCSN Neutrino Model

Load a neutrino luminosity model (see YAML documentation).
conf = config.load_config('../../data/config/test.yaml')
ccsn = source.initialize(conf)
_____no_output_____
BSD-3-Clause
docs/nb/simplemixing_class.ipynb
IceCubeOpenSource/USSR
Basic Mixing

Set up the mixing class, which only depends on $\theta_{12}$. See [nu-fit.org](http://www.nu-fit.org/) for current values of the PMNS mixing angles.
# Use theta_12 in degrees.
# To do: explicitly use astropy units for input.
mix = SimpleMixing(33.8)
_____no_output_____
BSD-3-Clause
docs/nb/simplemixing_class.ipynb
IceCubeOpenSource/USSR
Mix the Flavors

Apply the mixing and plot the resulting flux curves for the unmixed case and assuming the normal and inverted neutrino mass hierarchies.
fig, axes = plt.subplots(1, 3, figsize=(13, 3.5), sharex=True, sharey=True)
ax1, ax2, ax3 = axes

t = np.linspace(-0.1, 1, 201) * u.s

# UNMIXED
for ls, flavor in zip(["-", "--", "-.", ":"], Flavor):
    flux = ccsn.get_flux(t, flavor)
    ax1.plot(t, flux, ls, lw=2, label=flavor.to_tex(), alpha=0.7)
ax1.set_ti...
_____no_output_____
BSD-3-Clause
docs/nb/simplemixing_class.ipynb
IceCubeOpenSource/USSR
Imputation

Imputation means replacing missing values (missing value; NaN; blank) with substitute values.

Mean
import pandas as pd
import numpy as np

kolom = {'col1': [2, 9, 19],
         'col2': [5, np.nan, 17],
         'col3': [3, 9, np.nan],
         'col4': [6, 0, 9],
         'col5': [np.nan, 7, np.nan]}
data = pd.DataFrame(kolom)
data

data.fillna(data.mean())
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
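Mean imputation via `fillna(data.mean())` can be verified on a single toy column: the missing entry is replaced by the average of the observed values. A minimal, self-contained sketch with hypothetical numbers:

```python
import numpy as np
import pandas as pd

# Toy column with one missing value (hypothetical data)
s = pd.Series([2.0, np.nan, 4.0])
filled = s.fillna(s.mean())
print(filled.tolist())  # [2.0, 3.0, 4.0] — NaN replaced by the mean, 3.0
```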
Arbitrary Value
umur = {'umur': [29, 43, np.nan, 25, 34, np.nan, 50]}
data = pd.DataFrame(umur)
data

data.fillna(99)
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
End of Tail
umur = {'umur': [29, 43, np.nan, 25, 34, np.nan, 50]}
data = pd.DataFrame(umur)
data

# install the feature-engine library
!pip install feature-engine

# import EndTailImputer
from feature_engine.imputation import EndTailImputer

# create the imputer
imputer = EndTailImputer(imputation_method='gaussian', tail='right')

# fit the impu...
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
Categorical Data

Mode
from sklearn.impute import SimpleImputer

mobil = {'mobil': ['Ford', 'Ford', 'Toyota', 'Honda', np.nan, 'Toyota', 'Honda', 'Toyota', np.nan, np.nan]}
data = pd.DataFrame(mobil)
data

imp = SimpleImputer(strategy='most_frequent')
imp.fit_transform(data)
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
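Mode imputation with `SimpleImputer(strategy='most_frequent')` fills each missing entry with the column's most common value. A minimal sketch on a smaller hypothetical column (not the `mobil` data above):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy categorical column with one missing entry (hypothetical data)
data = pd.DataFrame({"brand": ["Toyota", "Toyota", "Honda", np.nan]})
imp = SimpleImputer(strategy="most_frequent")
filled = imp.fit_transform(data)
print(filled.ravel())  # NaN replaced by the mode, "Toyota"
```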
Random Sample
# import RandomSampleImputer
from feature_engine.imputation import RandomSampleImputer

# create data with missing values
data = {'Jenis Kelamin': ['Laki-laki', 'Perempuan', 'Laki-laki', np.nan, 'Laki-laki', 'Perempuan', 'Perempuan', np.nan, 'Laki-laki', np.nan],
        'Umur': [29, np.nan, 32, 43, 50, 22, np.nan, 52, np.nan, 17]}
...
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
Chapter 8 - Applying Machine Learning To Sentiment Analysis Overview - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [...
import pyprind
import pandas as pd
import os

# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = '/Users/sklee/datasets/aclImdb/'

labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
    for l in ('pos', 'neg'):
        path = os....
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Shuffling the DataFrame:
import numpy as np

np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
df.head(5)

df.to_csv('./movie_data.csv', index=False)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Introducing the bag-of-words model - **Vocabulary** : the collection of unique tokens (e.g. words) from the entire set of documents- Construct a feature vector from each document - Vector length = length of the vocabulary - Contains the counts of how often each token occurs in the particular document - Sparse vect...
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

count = CountVectorizer()
docs = np.array([
    'The sun is shining',
    'The weather is sweet',
    'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
print(count.vocabulary_)
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next, let us print the feature vectors that we just created: each index position in the feature vectors shown here corresponds to the integer values that ar...
print(bag.toarray())
[[0 1 0 1 1 0 1 0 0] [0 1 0 0 0 1 1 0 1] [2 3 2 1 1 1 2 1 1]]
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Those count values are called the **raw term frequency tf(t,d)**
- t: term
- d: document

The **n-gram** Models
- 1-gram: "the", "sun", "is", "shining"
- 2-gram: "the sun", "sun is", "is shining"
  - CountVectorizer(ngram_range=(2,2))

Assessing word relevancy via term frequency-inverse document frequency
np.set_printoptions(precision=2)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
- Frequently occurring words across multiple documents from both classes typically don't contain useful or discriminatory information.
- **Term frequency-inverse document frequency (tf-idf)** can be used to downweight those frequently occurring words in the feature vectors.

$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \...
from sklearn.feature_extraction.text import TfidfTransformer

tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ] [ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56] [ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
As we saw in the previous subsection, the word "is" had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word "is" is now associated with a relatively small tf-idf (0.45) in document 3 since it is also c...
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs + 1) / (3 + 1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
tf-idf of term "is" = 3.00
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously...
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf

l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
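The L2 normalization above divides the raw tf-idf vector by its Euclidean length, so the result always has unit norm. A self-contained sketch using the rounded raw tf-idf values quoted in the text:

```python
import numpy as np

# Raw tf-idf values for document 3, as quoted in the text (rounded)
raw = np.array([3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29])

l2 = raw / np.sqrt(np.sum(raw ** 2))
print(np.linalg.norm(l2))  # 1.0: the normalized vector has unit length
```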
Cleaning text data

**Before** we build the bag-of-words model.
df.loc[112, 'review'][-1000:]
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Python regular expression library
import re

def preprocessor(text):
    text = re.sub('<[^>]*>', '', text)
    emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
    text = re.sub('[\W]+', ' ', text.lower()) + \
        ' '.join(emoticons).replace('-', '')
    return text

preprocessor(df.loc[112, 'review'][-1000:])
preprocessor("</a>This :) is...
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Processing documents into tokens

Word Stemming

Transforming a word into its root form
- Original stemming algorithm: Martin F. Porter. An algorithm for suffix stripping. Program: electronic library and information systems, 14(3):130–137, 1980
- Snowball stemmer (Porter2 or "English" stemmer)
- Lancaster stemmer (Paice-H...
from nltk.stem.porter import PorterStemmer

porter = PorterStemmer()

def tokenizer(text):
    return text.split()

def tokenizer_porter(text):
    return [porter.stem(word) for word in text.split()]

tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Lemmatization
- thus -> thu
- Tries to find canonical forms of words
- Computationally expensive, little impact on text classification performance

Stop-words Removal
- Stop-words: extremely common words, e.g., is, and, has, like...
- Removal is useful when we use raw or normalized tf, rather than tf-idf
import nltk
nltk.download('stopwords')

from nltk.corpus import stopwords

stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:] if w not in stop]
stop[-10:]
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
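The filtering itself is a plain list comprehension against a set of stop-words. A self-contained sketch with a tiny illustrative stop list (NLTK's `stopwords.words('english')` has well over a hundred entries):

```python
# Tiny hand-picked stop list for illustration only.
stop = {'a', 'and', 'is', 'the', 'thus', 'they'}

tokens = 'runners like running and thus they run'.split()
filtered = [w for w in tokens if w not in stop]
print(filtered)  # ['runners', 'like', 'running', 'run']
```

Using a `set` (rather than a list) for the stop-words makes each membership test O(1), which matters when filtering large corpora.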
Training a logistic regression model for document classification
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values

from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorize...
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Working with bigger data - online algorithms and out-of-core learning
import numpy as np
import re
from nltk.corpus import stopwords

def tokenizer(text):
    text = re.sub('<[^>]*>', '', text)
    emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
    text = re.sub('[\W]+', ' ', text.lower()) + \
        ' '.join(emoticons).replace('-', '')
    tokenized = [w for w in t...
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Minibatch
def get_minibatch(doc_stream, size):
    docs, y = [], []
    try:
        for _ in range(size):
            text, label = next(doc_stream)
            docs.append(text)
            y.append(label)
    except StopIteration:
        return None, None
    return docs, y
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
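To see the `StopIteration` handling in action, here is the same function driven by a toy in-memory stream (standing in for a real document-stream generator such as one reading from disk):

```python
def get_minibatch(doc_stream, size):
    docs, y = [], []
    try:
        for _ in range(size):
            text, label = next(doc_stream)
            docs.append(text)
            y.append(label)
    except StopIteration:
        return None, None
    return docs, y

# Toy stream of 5 labeled documents.
stream = iter([(f'doc {i}', i % 2) for i in range(5)])

docs, y = get_minibatch(stream, size=2)
print(docs, y)   # ['doc 0', 'doc 1'] [0, 1]
docs, y = get_minibatch(stream, size=2)
print(docs, y)   # ['doc 2', 'doc 3'] [0, 1]
docs, y = get_minibatch(stream, size=2)
print(docs, y)   # None None  (stream exhausted mid-batch)
```

Note that a final partial batch is discarded (the function returns `None, None` as soon as the stream runs dry), which is the signal the training loop uses to stop.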
- We cannot use `CountVectorizer`, since it requires holding the complete vocabulary in memory. Likewise, `TfidfVectorizer` needs to keep all feature vectors in memory.
- We can use `HashingVectorizer` instead for online training (it uses the 32-bit MurmurHash3 algorithm by Austin Appleby, https://sites.google.com/site/murmurhash/).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vect = HashingVectorizer(decode_error='ignore',
                         n_features=2**21,
                         preprocessor=None,
                         tokenizer=tokenizer)
clf = SGDClassifier(loss='...
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
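The hashing trick itself can be sketched with the standard library alone. This is an illustrative toy, not sklearn's implementation: it uses CRC32 instead of MurmurHash3 and a tiny feature space, but it shows the key property that makes out-of-core learning possible — there is no fitted vocabulary, so transforming is stateless:

```python
import zlib

def hash_vectorize(tokens, n_features=2**4):
    """Toy hashing trick: map each token to a fixed-size vector via a
    hash, with a sign bit to reduce collision bias.
    (sklearn's HashingVectorizer uses MurmurHash3, not CRC32.)"""
    vec = [0.0] * n_features
    for tok in tokens:
        h = zlib.crc32(tok.encode('utf-8'))
        index = h % n_features              # bucket for this token
        sign = 1.0 if (h >> 31) & 1 == 0 else -1.0
        vec[index] += sign
    return vec

v1 = hash_vectorize('the cat sat'.split())
v2 = hash_vectorize('the cat sat'.split())
print(v1 == v2)  # True: no state to fit, so it is stream-safe
```

Because there is no vocabulary to store, any two processes (or minibatches) hashing the same token always land in the same bucket, at the cost of occasional collisions.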
! pip3 install git+https://github.com/extensive-nlp/ttc_nlp --quiet
! pip3 install torchmetrics --quiet

from ttctext.datamodules.sst import SSTDataModule
from ttctext.datasets.sst import StanfordSentimentTreeBank

sst_dataset = SSTDataModule(batch_size=128)
sst_dataset.setup()

import pytorch_lightning as pl
import torch...
_____no_output_____
MIT
09_NLP_Evaluation/ClassificationEvaluation.ipynb
satyajitghana/TSAI-DeepNLP-END2.0
MultiGroupDirectLiNGAM

Import and settings

In this example, we need to import `numpy`, `pandas`, and `graphviz` in addition to `lingam`.
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot

print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])

np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
['1.16.2', '0.24.2', '0.11.1', '1.5.4']
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Test data

We generate two datasets, each consisting of 6 variables.
x3 = np.random.uniform(size=1000)
x0 = 3.0*x3 + np.random.uniform(size=1000)
x2 = 6.0*x3 + np.random.uniform(size=1000)
x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=1000)
x5 = 4.0*x0 + np.random.uniform(size=1000)
x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=1000)
X1 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5])....
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
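As a quick sanity check on this generating mechanism, an ordinary least-squares fit of a child on its parent should recover the coefficient used above (e.g. x0 = 3.0*x3 + noise). A minimal sketch with a fixed seed:

```python
import numpy as np

np.random.seed(0)
x3 = np.random.uniform(size=1000)
x0 = 3.0 * x3 + np.random.uniform(size=1000)  # same structural equation as above

# OLS slope of x0 on x3 should be close to the true coefficient 3.0;
# the additive uniform noise only shifts the intercept.
slope, intercept = np.polyfit(x3, x0, deg=1)
print(round(slope, 2))
```

Regressing in the wrong direction (x3 on x0) would not recover a structural coefficient, which is exactly why the causal ordering matters.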
We create a list variable that contains two datasets.
X_list = [X1, X2]
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Causal Discovery

To run causal discovery for multiple datasets, we create a `MultiGroupDirectLiNGAM` object and call the `fit` method.
model = lingam.MultiGroupDirectLiNGAM()
model.fit(X_list)
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Using the `causal_order_` property, we can see the causal ordering estimated by the causal discovery.
model.causal_order_
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Also, using the `adjacency_matrices_` property, we can see the adjacency matrices resulting from the causal discovery. As the following shows, the DAG in each dataset is correctly estimated.
print(model.adjacency_matrices_[0])
make_dot(model.adjacency_matrices_[0])
print(model.adjacency_matrices_[1])
make_dot(model.adjacency_matrices_[1])
[[ 0.     0.     0.     3.483  0.     0.   ]
 [ 3.516  0.     2.466  0.165  0.     0.   ]
 [ 0.     0.     0.     6.383  0.     0.   ]
 [ 0.     0.     0.     0.     0.     0.   ]
 [ 8.456  0.    -1.471  0.     0.     0.   ]
 [ 4.446  0.     0.     0.     0.     0.   ]]
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
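`make_dot` renders such a matrix as a graph, but the edge information can also be read off directly: entry `[i, j]` is the weight of the edge x_j → x_i. A sketch with illustrative values (rounded, not the model's actual output):

```python
import numpy as np

# Illustrative adjacency matrix in lingam's convention:
# B[i, j] != 0 means there is an edge x_j ---> x_i with that weight.
B = np.array([
    [0.0, 0.0,  0.0, 3.0, 0.0, 0.0],
    [3.0, 0.0,  2.0, 0.0, 0.0, 0.0],
    [0.0, 0.0,  0.0, 6.0, 0.0, 0.0],
    [0.0, 0.0,  0.0, 0.0, 0.0, 0.0],
    [8.0, 0.0, -1.0, 0.0, 0.0, 0.0],
    [4.0, 0.0,  0.0, 0.0, 0.0, 0.0],
])

# List every edge as "child <--- parent (weight)".
edges = [f'x{i} <--- x{j} ({B[i, j]:.1f})'
         for i in range(B.shape[0])
         for j in range(B.shape[1])
         if B[i, j] != 0]
print(edges)
```

Row i of the matrix therefore lists the direct causes (parents) of x_i, matching the structural equations used to generate the test data.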
For comparison, we run DirectLiNGAM on a single dataset created by concatenating the two datasets.
X_all = pd.concat([X1, X2])
print(X_all.shape)

model_all = lingam.DirectLiNGAM()
model_all.fit(X_all)
model_all.causal_order_
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
You can see that the causal structure cannot be estimated correctly when the two datasets are simply concatenated into one.
make_dot(model_all.adjacency_matrix_)
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Independence between error variables

To check whether the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$.
p_values = model.get_error_independence_p_values(X_list)
print(p_values[0])
print(p_values[1])
[[0.    0.545 0.908 0.285 0.525 0.728]
 [0.545 0.    0.84  0.814 0.086 0.297]
 [0.908 0.84  0.    0.032 0.328 0.026]
 [0.285 0.814 0.032 0.    0.904 0.   ]
 [0.525 0.086 0.328 0.904 0.    0.237]
 [0.728 0.297 0.026 0.    0.237 0.   ]]
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Bootstrapping

In `MultiGroupDirectLiNGAM`, bootstrapping can be executed in the same way as with normal `DirectLiNGAM`.
results = model.bootstrap(X_list, n_sampling=100)
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
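The idea behind the bootstrap can be sketched without lingam at all: resample the rows with replacement, re-fit, and count how often an edge survives a minimum-effect threshold. A minimal single-edge version using OLS (illustrative only; `bootstrap` refits the full LiNGAM model on each resample):

```python
import numpy as np

np.random.seed(0)
n = 1000
x3 = np.random.uniform(size=n)
x0 = 3.0 * x3 + np.random.uniform(size=n)   # true edge x3 ---> x0

n_sampling = 100
min_causal_effect = 0.01
count = 0
for _ in range(n_sampling):
    idx = np.random.randint(0, n, size=n)   # resample rows with replacement
    slope, _ = np.polyfit(x3[idx], x0[idx], deg=1)
    if abs(slope) > min_causal_effect:
        count += 1

prob = count / n_sampling
print(prob)  # 1.0: the edge survives every resample
```

A high survival fraction across resamples is what the bootstrap results below report as the probability of a causal direction.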
Causal Directions

The `bootstrap` method returns a list of `BootstrapResult` objects, one per dataset, so we specify an index when accessing the results. We can get the ranking of the causal directions extracted by...
cdc = results[0].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01)
print_causal_directions(cdc, 100)
cdc = results[1].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01)
print_causal_directions(cdc, 100)
x0 <--- x3 (100.0%)
x1 <--- x0 (100.0%)
x1 <--- x2 (100.0%)
x2 <--- x3 (100.0%)
x4 <--- x0 (100.0%)
x4 <--- x2 (100.0%)
x5 <--- x0 (100.0%)
x1 <--- x3 (72.0%)
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Directed Acyclic Graphs

Also, using the `get_directed_acyclic_graph_counts()` method, we can get the ranking of the extracted DAGs. In the following sample code, the `n_dags` option limits the output to the top-3 ranked DAGs, and the `min_causal_effect` option limits it to causal directions with a coefficient of 0.01 or mor...
dagc = results[0].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01)
print_dagc(dagc, 100)
dagc = results[1].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01)
print_dagc(dagc, 100)
DAG[0]: 59.0%
	x0 <--- x3
	x1 <--- x0
	x1 <--- x2
	x1 <--- x3
	x2 <--- x3
	x4 <--- x0
	x4 <--- x2
	x5 <--- x0
DAG[1]: 17.0%
	x0 <--- x3
	x1 <--- x0
	x1 <--- x2
	x2 <--- x3
	x4 <--- x0
	x4 <--- x2
	x5 <--- x0
DAG[2]: 10.0%
	x0 <--- x2
	x0 <--- x3
	x1 <--- x0
	x1 <--- x2
	x1 <--- x3
	x2 <--- x3
	x4 <...
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Probability

Using the `get_probabilities()` method, we can get the bootstrap probability of each causal direction.
prob = results[0].get_probabilities(min_causal_effect=0.01)
print(prob)
[[0.   0.   0.08 1.   0.   0.  ]
 [1.   0.   1.   0.08 0.   0.05]
 [0.   0.   0.   1.   0.   0.  ]
 [0.   0.   0.   0.   0.   0.  ]
 [1.   0.   0.94 0.   0.   0.2 ]
 [1.   0.   0.   0.   0.01 0.  ]]
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Total Causal Effects

Using the `get_total_causal_effects()` method, we can get the list of total causal effects. The total causal effects are returned as a dictionary, so we can display the list nicely by assigning it to a `pandas.DataFrame`. We have also replaced the variable indices with labels below.
causal_effects = results[0].get_total_causal_effects(min_causal_effect=0.01)
df = pd.DataFrame(causal_effects)

labels = [f'x{i}' for i in range(X1.shape[1])]
df['from'] = df['from'].apply(lambda x: labels[x])
df['to'] = df['to'].apply(lambda x: labels[x])
df
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
We can easily perform sorting operations with pandas.DataFrame.
df.sort_values('effect', ascending=False).head()
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
And with `pandas.DataFrame`, we can easily filter by keywords. The following code extracts the causal directions toward x1.
df[df['to']=='x1'].head()
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Because the result holds the raw bootstrap values of the total causal effects (the original data from which the medians are calculated), we can draw a histogram of the values of a causal effect, as shown below.
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline

from_index = 3
to_index = 0
plt.hist(results[0].total_effects_[:, to_index, from_index])
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Bootstrap Probability of Path

Using the `get_paths()` method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. Each path is output as an array of variable indices. For example, the array `[3, 0, 1]` shows the path from variable X3 through variable X0 ...
from_index = 3  # index of x3
to_index = 1    # index of x1

pd.DataFrame(results[0].get_paths(from_index, to_index))
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
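What `get_paths()` enumerates can be sketched with a plain depth-first search over the example DAG from this notebook (x3 is the root; edges as estimated above):

```python
# Parent lists for the example DAG: parents[i] holds the indices j with
# an edge x_j ---> x_i (mirroring B[i, j] != 0 in lingam's convention).
parents = {0: [3], 1: [0, 2], 2: [3], 3: [], 4: [0, 2], 5: [0]}

# Invert to child lists for a forward depth-first search.
children = {i: [] for i in parents}
for child, ps in parents.items():
    for p in ps:
        children[p].append(child)

def all_paths(frm, to, path=None):
    """Enumerate every directed path frm -> ... -> to, as index arrays
    like those returned by get_paths()."""
    path = (path or []) + [frm]
    if frm == to:
        return [path]
    return [p for nxt in children[frm] for p in all_paths(nxt, to, path)]

print(sorted(all_paths(3, 1)))  # [[3, 0, 1], [3, 2, 1]]
```

`get_paths()` additionally attaches a total effect and a bootstrap probability to each enumerated path; this sketch only reproduces the path enumeration itself.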