| content (string, lengths 73-1.12M) | license (string, 3 classes) | path (string, lengths 9-197) | repo_name (string, lengths 7-106) | chain_length (int64, 1-144) |
|---|---|---|---|---|
<jupyter_start><jupyter_text># Popularity of Music Records<jupyter_code>import pandas as pd
import statsmodels.api as sm
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)<jupyter_output><empty_output><jupyter_text>## Problem 1.1 - Understanding the Data
How many observations (songs)... | no_license | /3_logistic_regression/popularity_of_music_records.ipynb | asgar0r/the_analytics_edge | 17 |
<jupyter_start><jupyter_text>### Read images<jupyter_code>paths = [f for f in glob.glob('/home/shahbaz/proj/size_normed_images/**/*.jpg', recursive=True)
if f.endswith('.jpg')]
train, test = train_test_split(paths, test_size = 0.2)
train, valid = train_test_split(train, test_size = 0.2)
train_labels = [extrac... | no_license | /notebooks/End-to-end-NonNN.ipynb | jay-uChicago/yoga-image-classifier | 5 |
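The row above truncates just after two chained `train_test_split` calls. The shuffle-then-slice idea behind them can be sketched with only the standard library; the function name and the placeholder paths below are illustrative, not sklearn's actual implementation:

```python
import random

def shuffle_split(items, test_size=0.2, seed=0):
    # Minimal stand-in for sklearn's train_test_split: shuffle a copy, then slice.
    rng = random.Random(seed)            # fixed seed so the split is reproducible
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

paths = [f"img_{i}.jpg" for i in range(100)]        # placeholder file names
train, test = shuffle_split(paths, test_size=0.2)   # 80 train / 20 test
train, valid = shuffle_split(train, test_size=0.2)  # 64 train / 16 valid
```

Splitting `train` a second time, as the notebook does, carves a validation set out of the training portion while leaving `test` untouched.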
<jupyter_start><jupyter_text># Example Plots
## Notebook with example interactions with the database for plotting data. <jupyter_code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
from database.db_setup import *
import database.config as config
erd = dj.ERD(epi_schema)
erd<jupyt... | non_permissive | /visualization/gallery.ipynb | a-darcher/epiphyte_dhv | 5 |
<jupyter_start><jupyter_text>Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.# AutoML 03: Remote Execution using Batch AI
In this example we use the scikit-learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html) to showc... | permissive | /automl/03b.auto-ml-remote-batchai.ipynb | mohideensandhu/MachineLearningNotebooks | 16 |
<jupyter_start><jupyter_text># Project. Life ExpectancyFor this project, the life expectancy dataset obtained from the website: https://www.kaggle.com/kumarajarshi/life-expectancy-who/version/1 was used. The data related to life expectancy and health factors were collected from the WHO website, and the corresponding ec... | permissive | /python-mising-data-imputation-regression-knn-mpl.ipynb | mophan/Python-Missing-Data-Imputation-Regression-kNN-MPL | 27 |
<jupyter_start><jupyter_text># Python tutorial #1This page was created for students learning Python in the Hallym University course 710231 (Understanding and Applications of Deep Learning). ## Hello World !<jupyter_code>print('Hello World')
print('Hello World {} + {} = {}'.format(2, 3, 2+3))<jupyter_output>Hello World
Hello World 2 + 3 = 5
<jupyter_text>## Basic data types<jupyter_code>x = 3
p... | no_license | /01_Python-basic/01-HelloPython.ipynb | hoznigo/Undergrad-DeepLearning-20Fall | 17 |
<jupyter_start><jupyter_text> Big Data Systems - Assignment #2 # MongoDB (Estimated time: 4 hours)
The objective of this assignment is to introduce the use of sharding in MongoDB by studying the behavior of key and hash sharding.
We start with a guided study of the cluster configuration and the sharding process, by ... | no_license | /Lab2/Assignment02/.ipynb_checkpoints/Assignment2-MongoDB-checkpoint.ipynb | sbachlet/bigdata | 10 |
<jupyter_start><jupyter_text># Visualizing Entry and Exit Strategies (Visualizing Strategies)## Take a look at the entries and exits for a single stock<jupyter_code>import os
import sys
# Add the directory of our own modules to the module search path, otherwise there will be an import error
module_dir = os.path.join(os.path.dirname(os.getcwd()), 'modules')
if not module_dir in sys.path:
sys.path.append(module_dir)
%matplotlib inline
import num... | no_license | /03. strategies/視覺化進出場策略.ipynb | victorgau/KHPY20180901 | 6 |
<jupyter_start><jupyter_text>
Classification with Python
In this notebook we try to practice all the classification algorithms that we have learned in this course.
We load a dataset using Pandas library, and apply the following algorithms, and find the best one for this specific dataset by accuracy evaluation m... | no_license | /Classification project.ipynb | matteolippolis/my_project | 21 |
<jupyter_start><jupyter_text>Analyzing US Economic Data and Building a Dashboard
Description
Extracting essential data from a dataset and displaying it is a necessary part of data science, so that individuals can make correct decisions based on the data. In this assignment, you will extract some essential economic... | no_license | /test_notebook_final.ipynb | pallavilanke/IBM-WATSON-STUDIO-NOTEBOOK | 24 |
<jupyter_start><jupyter_text>## Data preprocessing
##### Copyright (C) Microsoft Corporation.
see license file for details<jupyter_code># Allow multiple displays per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# AZUREML_NATIVE_SHARE_DIRECTORY mappin... | no_license | /Code/02_Model/020_visualize_class_separability.ipynb | georgeAccnt-GH/transfer_learning | 1 |
<jupyter_start><jupyter_text>#US Baby Names Data Analysis<jupyter_code>%matplotlib inline
import warnings
warnings.filterwarnings("ignore", message="axes.color_cycle is deprecated")
import numpy as np
import pandas as pd
import scipy as sp
import seaborn as sns
import sqlite3
#%%sh
!pwd
!ls -ls /kaggle/input/*/
!ls ../... | no_license | /notebooks/kuberiitb/us-baby-names-analysis-and-yearly-animations.ipynb | Sayem-Mohammad-Imtiaz/kaggle-notebooks | 2 |
<jupyter_start><jupyter_text># Data Preparation<jupyter_code>import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)<jupyter_output>Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Ext... | no_license | /.ipynb_checkpoints/Tensorflow_Mnist_CNN-checkpoint.ipynb | LevineHuang/Book_Tensorflow-Keras | 14 |
<jupyter_start><jupyter_text># Some More Python<jupyter_code>import numpy as np
import pandas as pd<jupyter_output><empty_output><jupyter_text># Strings### Arithmetic with Strings<jupyter_code>s = "spam"
e = "eggs"
s + e
s + " " + e
4 * (s + " ") + e
4 * (s + " ") + s + " and\n" + e<jupyter_output><empty_output><jupyt... | no_license | /.working/Python_PartII.ipynb | acdurbin/Astro300 | 24 |
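The string-arithmetic cells in the row above are shown without their outputs; `+` concatenates and `*` repeats, so the expressions evaluate as follows:

```python
s = "spam"
e = "eggs"

assert s + e == "spameggs"                              # + concatenates
assert s + " " + e == "spam eggs"
assert 4 * (s + " ") + e == "spam spam spam spam eggs"  # * repeats
assert 4 * (s + " ") + s + " and\n" + e == "spam spam spam spam spam and\neggs"
```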
<jupyter_start><jupyter_text># fMRI data preprocessing[](https://colab.research.google.com/github/adasegroup/NEUROML2020/blob/seminar4/seminar-4/preprocessing.ipynb)fMRI scans are saved in dicom format. For scientific analysis of brain images the... | no_license | /seminar4/preprocessing.ipynb | 123rugby/NEUROML2020 | 17 |
<jupyter_start><jupyter_text># Least Squares
Natasha Watkins<jupyter_code>from scipy.linalg import norm
import numpy as np
import scipy
import matplotlib.pyplot as plt
import cmath<jupyter_output><empty_output><jupyter_text>### Problem 1<jupyter_code>def solve(A, b):
Q, R = scipy.linalg.qr(A, mode='economic')... | no_license | /Probsets/Comp/Probset3/Least squares.ipynb | natashawatkins/BootCamp2018 | 8 |
<jupyter_start><jupyter_text>Visualizing Predictions <jupyter_code>model_conv = torchvision.models.vgg16(pretrained=True)
for param in model_conv.parameters():
param.requires_grad = False
print(model_conv.classifier.children)
model_conv.classifier = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096),
... | no_license | /FineTune_FCN_VGG.ipynb | romina72/faceDL | 1 |
<jupyter_start><jupyter_text># 1 Matrix operations
## 1.1 Create a 4*4 identity matrix<jupyter_code>#This project is designed to get familiar with python list and linear algebra
#You may not import any library yourself, especially numpy
A = [[1,2,3],
[2,3,3],
[1,2,5]]
B = [[1,2,3,5],
[2,3,3,5], ... | no_license | /Basic/4-linear-algebra/linear_regression_project_en.ipynb | PhyA/Machine-Learning | 10 |
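The exercise in the row above (build a 4*4 identity matrix using only Python lists, no numpy) reduces to a nested list comprehension; one possible sketch:

```python
def identity(n):
    # n x n identity matrix as nested Python lists: 1 on the diagonal, 0 elsewhere
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

I4 = identity(4)
# [[1, 0, 0, 0],
#  [0, 1, 0, 0],
#  [0, 0, 1, 0],
#  [0, 0, 0, 1]]
```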
<jupyter_start><jupyter_text># Lecture 2: Python Language Basics<jupyter_code>import numpy as np
np.random.seed(12345)
np.set_printoptions(precision=4, suppress=True)<jupyter_output><empty_output><jupyter_text>## Python Language Basics### Language Semantics#### Indentation, not braces
Example: Calculation of the Pytha... | permissive | /notebooks/lecture_2.ipynb | aadorian/data_curation_course | 83 |
<jupyter_start><jupyter_text># 1. Vector data preparations
This script prepares the **Paavo zip code dataset** from the Statistics of Finland
for machine learning purposes.
It reads the original shapefile, scales all the numerical values, joins some auxiliary
data and encodes one text field for machine learning purp... | no_license | /machineLearning/01_data_preparation/01_vectorDataPreparations.ipynb | VuokkoH/geocomputing | 10 |
<jupyter_start><jupyter_text># Prepare Data
Plan - Acquire - **Prepare** - Explore - Model - Deliver
**Goal:** Prepare, tidy, and clean the data so that it is ready for exploration and analysis.
**Input:** 1 or more dataframes acquired through the "acquire" step.
**Output:** 1 dataset split into 3 samples in th... | no_license | /prepare_lesson.ipynb | CurtisJohansen/classification-exercises | 10 |
<jupyter_start><jupyter_text><jupyter_code>%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from skimage import color
from skimage.io import imread
#from skimage.data import shepp_logan_phantom
from skimage.transform import radon, rescale
from google.colab import f... | no_license | /NLTV_L1.ipynb | juhosattila/nn_tdk | 3 |
<jupyter_start><jupyter_text># Q3 Crawling Chaos
## Problem URL:
http://ksnctf.sweetduet.info/q/3/unya.html## Overview
A simple web page with only an input form and a submit button is displayed. Entering and submitting anything just returns "No". Peeking at the HTML source with the browser's developer tools reveals an abnormally long, unintelligible string inside "unya.html". (ᒧᆞωᆞ)=(/ᆞωᆞ/),(ᒧᆞωᆞ).ᒧうー=-!!(/ᆞωᆞ/).にゃー,(〳ᆞωᆞ)=(ᒧᆞωᆞ),(〳ᆞωᆞ).〳にゃー=- -!(ᒧᆞωᆞ).ᒧうー,(ᒧᆞωᆞ).ᒧうーー=(〳ᆞ... | no_license | /ksnctf/KSNCTF/3/Q3_Crawling_Chaos.ipynb | adshidtadka/ctf | 1 |
<jupyter_start><jupyter_text># Number of samples = 120, number of features = 12, of which 3 are informative; the noise is a random number in the range 1-20### Sample generation<jupyter_code>noiseRandom=random.randint(1,20)
noiseRandom
data, target, coef = datasets.make_regression(n_samples = 120, n_features = 12, n_informa... | no_license | /Linear regression/Linear regression.ipynb | IlinykhYE/Data-mining | 4 |
<jupyter_start><jupyter_text># US - Baby Names### Introduction:
We are going to use a subset of [US Baby Names](https://www.kaggle.com/kaggle/us-baby-names) from Kaggle.
The file contains names from 2004 until 2014.
### Step 1. Import the necessary libraries<jupyter_code>import pandas as pd
import numpy as np... | no_license | /06_Stats/US_Baby_Names/Exercises.ipynb | prativadas/pandas_excercises | 12 |
<jupyter_start><jupyter_text>## 1. Introduction
Version control repositories like CVS, Subversion or Git can be a real gold mine for software developers. They contain every change to the source code, including the date (the "when"), the responsible developer (the "who"), as well as a little message that describes the inte... | no_license | /Python--Exploring_the_evolution_of_Linux/notebook.ipynb | nlt-python/Datacamp_Projects | 9 |
<jupyter_start><jupyter_text># Toronto - The City Of Neighborhoods The strength and vitality of the many neighbourhoods that make up Toronto, Ontario, Canada have earned the city its unofficial nickname of "the city of neighbourhoods." There are over 140 neighbourhoods officially recognized by the City of Toronto. The ai... | no_license | /capstone projects/city_of_neighbourhoods.ipynb | Cea-Learning/coursera | 15 |
<jupyter_start><jupyter_text># Importing the dataset<jupyter_code>dataset = pd.read_csv(r"C:\Users\user\Downloads\WA_Fn-UseC_-HR-Employee-Attrition.csv")
print (dataset.head)
<jupyter_output><bound method NDFrame.head of Age Attrition BusinessTravel DailyRate Department \
0 41 Yes ... | no_license | /industrialseminar.ipynb | shreshtha11/Projects | 10 |
<jupyter_start><jupyter_text># Section 4 - Building an ANN<jupyter_code>import numpy as np
import matplotlib as plt
import pandas as pd
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values<jupyter_output><empty_output><jupyter_text>### Encode categorical features<... | no_license | /Part 1 - Artificial Neural Networks.ipynb | Efthymios-Stathakis/Deep-Learning-with-ANN | 9 |
<jupyter_start><jupyter_text># EX 7<jupyter_code>from sklearn.datasets import make_moons
X, y = make_moons(n_samples=10000, noise=0.4, random_state=42)
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accurac... | permissive | /HW/6_Trees.ipynb | ss52/handson-ml2 | 6 |
<jupyter_start><jupyter_text># Linear Mixed Model Multi-Trait Example
In this tutorial we look at how to use linear mixed models to study genetic associations for multiple traits simultaneously. In some settings this can be a more powerful approach for analysis.## Setting up<jupyter_code># activate inline plotting
%ma... | no_license | /LMM_multitrait/LMM_multitrait_example.ipynb | davismcc/embl_predocs_limix_tutorial_Nov2015 | 8 |
<jupyter_start><jupyter_text># Softmax Classification (with Cross-Entropy Loss)
In this exercise you will:
- Implement a fully-vectorized **loss function** for the Softmax classifier
- Implement the fully-vectorized expression for its **analytic gradient**
- **Check your implementation** with numerical gradient
- Use... | no_license | /exercise_1/1_softmax.ipynb | Plan-T42/i2DL-Exercises | 9 |
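The exercise in the row above asks for a fully-vectorized softmax loss. As a plain-Python reference to check a vectorized version against (this is a sketch, not the course's solution), the numerically stable single-example form subtracts the max score before exponentiating:

```python
import math

def softmax_loss(scores, label):
    # Cross-entropy loss -log(softmax(scores)[label]), numerically stable
    m = max(scores)                           # shifting by the max avoids overflow
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[label] / sum(exps))

# Equal scores give a uniform softmax, so the loss is log(num_classes):
loss = softmax_loss([0.0, 0.0, 0.0], 0)       # log(3)
```

The shift by `m` leaves the softmax probabilities unchanged, which is easy to verify by evaluating the loss on shifted copies of the same scores.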
<jupyter_start><jupyter_text># Linear Regression model### Importing libraries<jupyter_code>import pandas as pd
import pickle
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split<jupyter_output><empty_output><jupyter_text>### Loding the dataset<jupyter_code>data = pd.rea... | permissive | /src/model/model_creator.ipynb | ashbin17/mldeploy | 8 |
<jupyter_start><jupyter_text>Anatomy of a module
<jupyter_code>#Running
!java --module-path feeding --module zoo.animal.feeding/zoo.animal.feeding.Task
#Running (Short Form)
!java -p feeding -m zoo.animal.feeding/zoo.animal.feeding.Task
#Packaging
!jar -cvf mods/zoo.animal.feeding.jar -C feeding/ .<jupyter_output>add... | no_license | /modules/Modules.ipynb | mespinozah/LearnJava11Certification | 13 |
<jupyter_start><jupyter_text>## Modules, Methods, Constants<jupyter_code>from sklearn import svm
from sklearn.decomposition import PCA
import numpy as np
import pandas as pd
import json
import re
import random as rd
nikkud = ['ֹ', 'ְ', 'ּ', 'ׁ', 'ׂ', 'ָ', 'ֵ', 'ַ', 'ֶ', 'ִ', 'ֻ', 'ֱ', 'ֲ', 'ֳ', 'ׇ']
alphabet = ['א','... | no_license | /svm_language_classifier.ipynb | TalmudLab/talmud-word-translation | 6 |
<jupyter_start><jupyter_text># Introduction to HTML
HyperText Markup Language (HTML), is the standard markup language used to create web pages.
* Used as the markup language for basically every website on the internet.
* Developed by the World Wide Web Consortium (W3C).
* Current version: HTML5 is supported by most mode... | no_license | /notebooks/web/Introduction to HTML.ipynb | UiO-INF3331/code_snippets | 9 |
<jupyter_start><jupyter_text>### Dictionaries for data science ###<jupyter_code>feature_names = ['CountryName', 'CountryCode', 'IndicatorName', 'IndicatorCode', 'Year', 'Value']
row_vals = ['Arab World', 'ARB', 'Adolescent fertility rate (births per 1,000 women ages 15-19)', 'SP.ADO.TFRT', '1960', '133.56090740552298']... | no_license | /4.Python-Data-Science-Toolbox(Part 2)/3.practice.ipynb | clghks/Data-Scientist-with-Python | 5 |
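The classic exercise for the two lists in the row above is to zip them into a single record; `dict(zip(...))` pairs the i-th name with the i-th value (the variable name `rs_dict` below is illustrative):

```python
feature_names = ['CountryName', 'CountryCode', 'IndicatorName',
                 'IndicatorCode', 'Year', 'Value']
row_vals = ['Arab World', 'ARB',
            'Adolescent fertility rate (births per 1,000 women ages 15-19)',
            'SP.ADO.TFRT', '1960', '133.56090740552298']

# zip pairs names with values positionally; dict turns the pairs into a record
rs_dict = dict(zip(feature_names, row_vals))
# rs_dict['Year'] -> '1960'
```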
<jupyter_start><jupyter_text># Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in pract... | no_license | /assignment2/ConvolutionalNetworks.ipynb | diler5/cs231n | 18 |
<jupyter_start><jupyter_text># Detect OutliersIn our EDA in R, we determined that the ids 524 692 1183 1299 (R counts start at 1!) had very large GrLivArea, while 31 496 534 917 969 had very low SalePrice for their size, relative to the rest of the population. We want to determine what sets of points can be dropped in... | no_license | /.ipynb_checkpoints/DetectOutliers-checkpoint.ipynb | richcorrado/ART | 8 |
<jupyter_start><jupyter_text># Practical Statistics for Data Scientists (Python)
# Chapter 1. Exploratory Data Analysis
> (c) 2019 Peter C. Bruce, Andrew Bruce, Peter GedeckImport required Python packages.<jupyter_code>%matplotlib inline
from pathlib import Path
import pandas as pd
import numpy as np
from scipy.stats... | non_permissive | /practical-statistics-for-data-scientists/python/notebooks/Chapter 1 - Exploratory Data Analysis.ipynb | MargoSolo/Data_science_study | 28 |
<jupyter_start><jupyter_text># 7. Write a program to scrape all the available details of the top 10 gaming laptops from digit.in. <jupyter_code>driver = webdriver.Chrome(r"E:\Aniket\chromedriver_win32\chromedriver.exe")
url="https://www.digit.in/top-products/best-gaming-laptops-40.html"
driver.get(url)
Brands=[]
Products_De... | no_license | /FlipRobo_Webscrapping_3_All_Updated.ipynb | Swarna-ashik/FlipRobo | 8 |
<jupyter_start><jupyter_text># Two Sample T-Test - Lab
## Introduction
The two-sample t-test is used to determine if two population means are equal. A common application is to test if a new process or treatment is superior to a current process or treatment.
## Objectives
You will be able to:
* Understand the t-stat... | non_permissive | /index.ipynb | dmart49/dsc-two-sample-t-tests-lab-houston-ds-060319 | 9 |
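The "new treatment vs current treatment" comparison described in the row above boils down to the pooled t statistic. A standard-library sketch, assuming equal variances as in the classic two-sample test (the measurement values are made up for illustration):

```python
import math
from statistics import mean, variance

def two_sample_t(x, y):
    # Pooled (equal-variance) two-sample t statistic and its degrees of freedom
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    t = (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2

# "new treatment" vs "current treatment" measurements (made-up numbers):
t_stat, df = two_sample_t([5.1, 4.9, 5.3, 5.0], [4.2, 4.4, 4.1, 4.5])
```

The statistic is compared against the t distribution with `df` degrees of freedom to get a p-value; in practice `scipy.stats.ttest_ind` wraps this whole computation.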
<jupyter_start><jupyter_text>### Load Amazon Data into Spark DataFrame<jupyter_code>from pyspark import SparkFiles
url = "https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Video_Games_v1_00.tsv.gz"
spark.sparkContext.addFile(url)
df = spark.read.option("encoding", "UTF-8").csv(SparkFiles.get(""), sep="\... | no_license | /Amazon_Reviews_ETL_starter_code_RSD.ipynb | Rubysd/Amazon-Vine-Analysis | 3 |
<jupyter_start><jupyter_text>
3 different feature-based approaches
1. bag of visual features:
* Create a 4000-dim histogram of the centroids the features are assigned to (per descriptor).
2. BoV with spatio-temporal pyramid.
* concatenate the 6 4000 dim histograms together. split video into 2 time blocks, 3 horizio... | no_license | /examples/ucf_recognition_20/.ipynb_checkpoints/buildFVs-checkpoint.ipynb | anenbergb/seniorThesis | 2 |
<jupyter_start><jupyter_text>## OpenCV Image Processing<jupyter_code>import cv2
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
img1 = cv2.imread('DATA/dog_backpack.png')
img2 = cv2.imread('DATA/watermark_no_copy.png')<jupyter_output><empty_output><jupyter_text>When reading from cv2.imread, the im... | no_license | /Image_Proceesing_Part_I.ipynb | sagunkayastha/OpenCV_Image_Processing | 19 |
<jupyter_start><jupyter_text>
# Overview of some tools applied to COVID-19 data
The purpose of this short overview is to give you a sense of the utility of some of the tools you will study later in this course and to check that you already have (or can install) some of the modules we shall use later.
In this demo, wit... | no_license | /jupyter/01_Overview_of_tools_applied_to_COVID-19_example.ipynb | gomezlis/mth271content | 13 |
<jupyter_start><jupyter_text>### Module 1 Exercises#### 1) Load the file mate.txt from the data folder.Method 1<jupyter_code>path='../data/'
file = open(path+'mate.txt', mode='r')
file.read()<jupyter_output><empty_output><jupyter_text>Method 2<jupyter_code>file = open(path+'mate.txt', mode='r')
for line in file:
print(line... | no_license | /Task on Python-Module1.ipynb | Katty-K/task-python | 6 |
<jupyter_start><jupyter_text># **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images... | no_license | /P1-e2.ipynb | yychiang/CarND-LaneLines-P1 | 10 |
<jupyter_start><jupyter_text># First attempt ([2017-06-17, 11:22])
Experimenting with Jupyter and pandas.<jupyter_code>import pandas as pd
props = pd.read_csv("http://www.firstpythonnotebook.org/_static/committees.csv")
props.head(3)
props.info()
contribs = pd.read_csv("http://www.firstpythonnotebook.org/_static/contrib... | no_license | /prop-64-analysis.ipynb | olgapavlova/first-python-notebook | 1 |
<jupyter_start><jupyter_text>
# Sprint Challenge
## *Data Science Unit 4 Sprint 1*
After a week of Natural Language Processing, you've learned some cool new stuff: how to process text, how turn text into vectors, and how to model topics from documents. Apply your newly acquired skills to one of the most famous NLP ... | no_license | /sprint-challenge/LS_DS_415_Sprint_Challenge.ipynb | jtkernan7/DS-Unit-4-Sprint-1-NLP | 5 |
<jupyter_start><jupyter_text># Logistic and linear regression with deterministic and stochastic first order methods
Lab 2 : Optimisation - DataScience Master
Authors : Robert Gower, Alexandre Gramfort, Pierre Ablin, Mathurin Massias
The aim of this lab is to implement and compare various batch and stochast... | no_license | /lab2_moutei_soufiane_and_barrahma-tlemcani_mohammed.ipynb | soufianemoutei/Optimization-for-Data-Science | 23 |
<jupyter_start><jupyter_text># PARAMS: Data sources config<jupyter_code>INPUT_DIR = '../input/'
OUTPUT_DIR = './'
!ls -lh {INPUT_DIR}<jupyter_output><empty_output><jupyter_text># Imports<jupyter_code>%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
... | no_license | /Notebooks/py/neuronq/simplest-imaginable-decision-tree-model/simplest-imaginable-decision-tree-model.ipynb | nischalshrestha/automatic_wat_discovery | 15 |
<jupyter_start><jupyter_text>## 1. Write a Python Program to Check if a Number is Positive, Negative or Zero?<jupyter_code>def number_check(num):
if num < 0:
return 'Negative'
elif num > 0:
return 'Positive'
else:
return 'Zero'
num = float(input("Enter a Number: "))
print(f'{num} is ... | no_license | /Programming_Assingment3.ipynb | anuj-mahawar/Ineuron_Full_Stack_Data_Science | 5 |
<jupyter_start><jupyter_text>## Observations and Insights <jupyter_code># Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import scipy.stats as st
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "da... | no_license | /01-Case-Assignment/Pymaceuticals/.ipynb_checkpoints/pymaceuticals_starter-checkpoint.ipynb | citizendez/Matplotlib_Challenge | 6 |
<jupyter_start><jupyter_text>## 1. Model Creation
1.1 Define the three-compartment model above using a system of ODEs<jupyter_code>def equations(y, t, p):
"""
Define system of ODEs describing the three compartment model
    :var: y = list of concentrations in all 3 compartments ([C1, C2, C3])
:params: p = l... | no_license | /Numerical Methods Practice.ipynb | annaraegeorge/pkpd-practice | 8 |
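The compartment ODEs defined in the row above would normally be handed to `scipy.integrate.odeint`. For intuition, a fixed-step forward-Euler integrator for any `f(y, t)` system can be sketched in pure Python; it is illustrated on simple one-compartment elimination dC/dt = -k*C, a stand-in chosen because the model's actual rate parameters are truncated above:

```python
import math

def euler(f, y0, t0, t1, steps):
    # Fixed-step forward Euler for the system dy/dt = f(y, t)
    h = (t1 - t0) / steps
    y, t = list(y0), t0
    for _ in range(steps):
        dydt = f(y, t)
        y = [yi + h * di for yi, di in zip(y, dydt)]
        t += h
    return y

# One-compartment elimination dC/dt = -k*C with k = 1, C(0) = 1:
k = 1.0
C1 = euler(lambda y, t: [-k * y[0]], [1.0], 0.0, 1.0, 10_000)
# C1[0] approaches exp(-1) as the step count grows
```

Forward Euler's error shrinks only linearly in the step size, which is why library solvers with adaptive higher-order schemes are preferred for real PK/PD work.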
<jupyter_start><jupyter_text>#### Buid the Model<jupyter_code>model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layer... | no_license | /CIFAR10_CNN_Image_Classification.ipynb | blazingphoenix13/CIFAR10_CNN_Image_Classification | 1 |
<jupyter_start><jupyter_text>## Saving data from torchtext<jupyter_code>NGRAMS = 2
DATADIR = "./data"
BATCH_SIZE = 16
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if not os.path.isdir(DATADIR):
os.mkdir(DATADIR)
train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](
root... | no_license | /notebooks/[Step 1] Follow PyTorch Tutorial.ipynb | kangeugine/nlp-getting-started | 9 |
<jupyter_start><jupyter_text># Pseudorandom number generators
While pseudorandom numbers are generated by a deterministic algorithm, we can mostly treat them as if they were true random numbers and we will drop the “pseudo” prefix. Fundamentally, the algorithm generates random integers which are then normalized to give... | no_license | /08_MonteCarlo.ipynb | chelseaphilippa/LaboratoryOfComputationalPhysics_Y3 | 9 |
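The "deterministic algorithm generating random integers which are then normalized" described in the row above is exactly what a linear congruential generator does; a minimal sketch (the multiplier and increment are the well-known Numerical Recipes constants):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Linear congruential generator: x_{n+1} = (a*x_n + c) mod m
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m                     # normalize the integer to [0, 1)

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]  # same seed -> same sequence
```

Determinism is the point: reseeding with the same value reproduces the stream exactly, which is what makes Monte Carlo experiments repeatable.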
<jupyter_start><jupyter_text>1) As per the table, it is observed that the charges for smokers are quite high in comparison with non-smokers.
2) The charges increase as the age of the person increases, and they are comparatively higher in the case of smokers.
3) The charges increase as the BMI of the person increases. Also, the char... | no_license | /insurance.ipynb | mahewashabdi/Insurance-Forecast | 5 |
<jupyter_start><jupyter_text># Preliminaries
Write requirements to a file any time you run this, in case you have to go back and recover dependencies.
Requirements are hosted for each notebook in the companion github repo, and can be pulled down and installed here if needed. Companion github repo is located at https://git... | permissive | /Ch6/tl-for-nlp-section6-1.ipynb | sahmel/transfer-learning-for-nlp | 22 |
<jupyter_start><jupyter_text># 1. Define Concrete Dropout and Variational Dropout<jupyter_code>import torch
import torch.nn as nn
import torch.nn.functional as F
class ConcreteDropout(nn.Module):
def __init__(self, p_logit=-2.0, temp=0.01, eps=1e-8):
super(ConcreteDropout, self).__init__()
self.p_lo... | no_license | /Concrete_dropout_and_Variational_dropout.ipynb | GRE-EXAMINATION/MCDO | 4 |
<jupyter_start><jupyter_text>
Logistic Regression Table of Contents
In this lab, you will cover logistic regression by using PyTorch.
Logistic Function
Tanh
Relu
Compare Activation Functions
Estimated Time Needed: 15 min
We'll need the following libraries<jupyter_code># Import the libraries we... | no_license | /labs/4.3.1lactivationfuction_v2.ipynb | Bcopeland64/Data-Science-Notebooks | 13 |
<jupyter_start><jupyter_text>
Python | Implementation of Movie Recommender System
A recommender system seeks to predict or filter preferences according to the user’s choices. Recommender systems are utilized in a variety of areas including movies, music, news, books, research articles, search queries, s... | no_license | /Movie_recommender_system.ipynb | AbhishekGladiatorz/Movie_recommender_system | 1 |
<jupyter_start><jupyter_text># Again for Dec 2006<jupyter_code>#loading files
path = '/ocean/nsoontie/MEOPAR/SalishSea/results/storm-surges/final/dec2006/'
runs = {'all_forcing','tidesonly'}
fUs={}; fVs={}; fTs={};
for key in runs:
fUs[key] = NC.Dataset(path + key +'/SalishSea_4h_20061211_20061217_grid_U.nc','r');
... | permissive | /FigureScripts/Nov 2009 -weather compare.ipynb | ChanJeunlam/storm-surge | 4 |
<jupyter_start><jupyter_text> Word Analogies Task
- In the word analogy task, we complete the sentence "a is to b as c is to ___". An example is 'man is to woman as king is to queen'. In detail, we are trying to find
a word d, such that the associated word vectors ea, eb, ec, ed are related in the following manner: eb-ea=... | no_license | /DS_Practice/Word2Vec/Word Analogies.ipynb | The-Nightwing/DataScience | 2 |
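With toy vectors, the eb-ea = ed-ec relation from the row above can be searched directly by cosine similarity; a minimal sketch (the 2-D toy vectors are invented for illustration, not real embeddings):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(a, b, c, vectors):
    # Find d maximizing cosine(e_b - e_a, e_d - e_c), excluding a, b, c themselves
    diff_ab = [y - x for x, y in zip(vectors[a], vectors[b])]
    best, best_sim = None, -2.0
    for word, e_d in vectors.items():
        if word in (a, b, c):
            continue
        diff_cd = [y - x for x, y in zip(vectors[c], e_d)]
        sim = cosine(diff_ab, diff_cd)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

toy = {'man': [1.0, 0.0], 'woman': [1.0, 1.0],
       'king': [3.0, 0.0], 'queen': [3.0, 1.0], 'apple': [0.0, 5.0]}
# 'man is to woman as king is to ___' -> 'queen'
```

Real word-vector code usually compares `e_b - e_a + e_c` against every `e_d` at once with matrix operations; the loop form above just makes the geometry explicit.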
<jupyter_start><jupyter_text># Nike Inc. (NKE) Stock Prices, Dividends and Splits<jupyter_code>## import library
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import quandl
import numpy as np
import matplotlib.pyplot as plt #for plotting
from sklearn.model_selection import train_test_split
... | no_license | /stock_market/Nike.ipynb | tanviredu/REGRESSION | 3 |
<jupyter_start><jupyter_text>## Random sampling<jupyter_code>rs = pd.read_pickle('../../Resources/random-sampling.pkl')
rs.groupby(['run', 'set', 'metric']).count()['epoch'].describe()
43617 / np.round(rs.epoch.max())
rs_rolled = rolling(rs, window=700, skip=50)
p_rs = {
'style': 'set',
'dashes': True,
'mar... | no_license | /notebooks/Plot Results.ipynb | streitlua/esa_ecodyna | 4 |
<jupyter_start><jupyter_text>Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson
Classification with Wide ResNet and CIFAR10<jupyter_code>import os
import time
import numpy as np
import pandas as pd
import matplotlib.pypl... | no_license | /FastGeometricEnsemble.ipynb | zhiheng-qian/cs4875-research-project | 1 |
<jupyter_start><jupyter_text>## Python statistics essential training - 04_04_testingStandard imports<jupyter_code>import math
import io
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as pp
%matplotlib inline
import scipy.stats
import scipy.optimize
import scipy.spatial
pumps = pd.re... | no_license | /Unit8/Exercise Files/chapter4/04_04/.ipynb_checkpoints/04_04_testing_end-checkpoint.ipynb | varsha2509/Springboard-DS | 1 |
<jupyter_start><jupyter_text>We now have a representation of a deck of cards, with each card as a string.
This is ... not ideal.Object-oriented Cards
What do we want the Cards to be able to do?
* card should be able to return its own rank
* card should be able to return its own suit
* card should be able to print it... | no_license | /cards.ipynb | seanreed1111/oop-2 | 7 |
<jupyter_start><jupyter_text>_Note: heainit must already be running in the terminal you run this from if you want to make and run XSPEC scripts!_<jupyter_code>from astropy.table import Table, Column
import numpy as np
import os
import subprocess
from scipy.fftpack import fftfreq
# from scipy.stats import binned_statist... | permissive | /psd_fitting.ipynb | astrojuan/MAXIJ1535_QPO | 6 |
<jupyter_start><jupyter_text>## Generate 2D synthetic data<jupyter_code># 2D random data
x, y = make_classification(n_samples=1000,
n_features=2,
n_informative=1,
n_redundant=0,
n_clusters_per_class=1,
... | no_license | /toy_dataset.ipynb | changx03/jupyter_tensorflow | 11 |
<jupyter_start><jupyter_text># **Mental Health Prediction**
## **1. Library and data loading** ##<jupyter_code>import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter
from scipy im... | no_license | /mental_health_prediction.ipynb | avurity/Mental-Health-Prediction | 28 |
<jupyter_start><jupyter_text>
*This notebook contains material from the Python Workshop held as part of the
[Data Challenge Industrial 4.0](www.lania.mx/dci) event. The content has been adapted
by HTM and GED from the book [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.... | no_license | /01_03_Computation_on_arrays_ufuncs.ipynb | htapiagroup/introduccion-a-numpy-DoddyRafael | 14 |
<jupyter_start><jupyter_text>### In this notebook, we display wordclouds using a stopword list composed of a combination of stopword dictionaries from the spaCy, WordCloud, and NLTK packages.
* Three wordclouds are displayed: a full dataset wordcloud, followed by a positive sentiment only wordcloud, and a negative sentim... | no_license | /EDA/Full, Pos, Neg Wordclouds.ipynb | jasonli19/Capstone-II | 8 |
<jupyter_start><jupyter_text>
1D Numpy in PythonWelcome! This notebook will teach you about using Numpy in the Python Programming Language. By the end of this lab, you'll know what Numpy is and the Numpy operations.Table of Contents
Preparation
What is Numpy?... | no_license | /5.1-Numpy1D.ipynb | anuraj76/Python-Programming | 49 |
<jupyter_start><jupyter_text># Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your in... | permissive | /Neural Networks and Deep Learning/week2/Logistic+Regression+with+a+Neural+Network+mindset+v5.ipynb | mukul54/Coursera-Deep-Learning | 17 |
<jupyter_start><jupyter_text># Basic Text Classification with Naive Bayes
***
In the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on ... | no_license | /Naive_Bayes/Mini_Project_Naive_Bayes-EB.ipynb | echelleburns/SpringBoard | 17 |
<jupyter_start><jupyter_text># Regular Expressions
- Definition
- Match Vs Search
- Substitute substrings
- Find all
- Meta Vs Literal characters
- Various identifiers
- Back referencing example
- Exercise<jupyter_code>import re
a = "This is Learnbay class9"
mObj = re.match("Learnbay",a)
print(mObj)
mObj = re.match("Th... | no_license | /PYTHON-PAN-VITTHAL_SEP_2019/09-FILE-IO_REGEX-DONE/14_RegularExpression.ipynb | vitthalkcontact/Python | 10 |
<jupyter_start><jupyter_text># String Formatter<jupyter_code>name = 'KGB Talkie'
print('The Youtube channel is {}'.format(name))
print(f'The Youtube channel is {name}')
# Minimum width and alignment between columns
# let's say we have two columns
# day value
# 1 10
# 10 11
data_science_tuts = [('Python for beginners'... | no_license | /Working with text files.ipynb | daynoh/Natural-language-processing-using-Tensorflow-and-spacy | 5 |
<jupyter_start><jupyter_text># Dataset<jupyter_code>#To ignore warnings.
import warnings
warnings.filterwarnings("ignore")
#Load the dataset.
import pandas as pd
df=pd.read_csv("Dataset/spam.csv", encoding='latin-1')
#Remove unwanted columns.
df = df.drop(labels = ["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis = 1)
#... | no_license | /word2vec.ipynb | Susheel-1999/NLP-spam_ham_classification_using_different_techniques | 5 |
<jupyter_start><jupyter_text>## problem1<jupyter_code>#pulse effect in a single period
z1<-c(0,1,rep(0,99))
#constant effect for first ten periods
z2<-c(0,rep(1,10),rep(0,90))
#gradual decrease
z3<-c(0,1,0.75,0.5,0.25,rep(0,96))
#white noise
e<-rnorm(101)
#
model1<-function(a0,a1,z,e){
#ar1 with intervention
y<-rep(0,101)
f... | no_license | /time_series/applied_econometric_time_series/chapter4_original_problem.ipynb | owari-taro/statistics | 1 |
<jupyter_start><jupyter_text># End to end 2D CNN for GTzan music classification EnvCNN
WINDOWED Version
Adapted by AL Koerich
To GTzan 3-fold
11 December 2018<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import os, sys
import soundfile as sf
from sklearn.utils import shuffle
f... | no_license | /3_GTzan_3f_CNN_2D_75w-Load-110250_frozen.ipynb | karlmiko/IJCNN2020 | 1 |
<jupyter_start><jupyter_text># Thinkful Prep Course: Unit 3.1
## Project 5: Describing Data<jupyter_code>import pandas as pd
import numpy as np
import statistics as statistics<jupyter_output><empty_output><jupyter_text>#### 1. Greg was 14, Marcia was 12, Peter was 11, Jan was 10, Bobby was 8, and Cindy was 6 when they ... | no_license | /Thinkful Prep Course_Unit 3.1_ Project 5 Drill - Describing Data.ipynb | jmniet36/Thinkful-Prep-Course-Data-Science | 4 |
<jupyter_start><jupyter_text># **INFO5731 Assignment Four**
In this assignment, you are required to conduct topic modeling and sentiment analysis based on **the dataset you created from assignment three**.# **Question 1: Topic Modeling**(30 points). This question is designed to help you develop a feel for the way topic m... | no_license | /Rubab_INFO5731_Assignment_Four.ipynb | rubabshz/Rubab_INFO5731_Spring2020 | 5
<jupyter_start><jupyter_text><jupyter_code>kingdoms = ['Bacteria', 'Protozoa', 'Chromista', 'Plantae', 'Fungi', 'Animalia']
kingdoms[0]
kingdoms
<jupyter_output><empty_output> | no_license | /Lab2/Chuong.ipynb | VinhPhucs/Python- | 1 |
<jupyter_start><jupyter_text>## Task 3.1<jupyter_code>%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
def arbitrary_poly(params):
poly_model = lambda x: sum([p*(x**i) for i, p in enumerate(params)])
return poly_model
# params: [theta_0, theta_1, theta_2]
true_params = ... | no_license | /Assignment 3/.ipynb_checkpoints/Assignment3 TTK4260 - martkaa-checkpoint.ipynb | martkaa/TTK4260-multimod | 1 |
<jupyter_start><jupyter_text># COGS 108 - Final Project # OverviewThis project investigates whether nearby restaurants in North Carolina share similar sanitary conditions and whether such similarities are caused by the socio-economic conditions of the restaurants' area. The underlying data did not follow any clear dist... | no_license | /final_project/FinalProject_cribeiro23.ipynb | COGS108/individual_sp20 | 24 |
<jupyter_start><jupyter_text># Image features exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the ... | no_license | /CS231N/assignment1/.ipynb_checkpoints/features-checkpoint.ipynb | 3375786734/ML_exercise | 5 |
<jupyter_start><jupyter_text># Confidence Intervals<jupyter_code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st<jupyter_output><empty_output><jupyter_text>## Challenge 1
We want to estimate the average size of the men of a country with a confidence level of 80%. Assuming... | no_license | /Labs/module_2/Confidence-Intervals/your-code/main.ipynb | sachadolle/806_Repo | 8 |
<jupyter_start><jupyter_text># Input and Output (2019.9.19)
Unit learning objectives:
Python input: input(), raw_input()
Python output: print()
Python formatted output techniques (the instructor says this is important and a guaranteed exam topic)## Basic input: reading user input
```
When using input, data is read from standard input (stdin);
this data is of string type {important}
```
stdin==Standard Input<jupyter_code>x = input('Your name: ')
print('Hello, ' + x)
a = input("Please enter: ")
a<jupy... | no_license | /Python_1_IO.ipynb | iE019/CS4high_4080E019 | 6
<jupyter_start><jupyter_text>
Area Plots, Histograms, and Bar Plots## Introduction
In this lab, we will continue exploring the Matplotlib library and will learn how to create additional plots, namely area plots, histograms, and bar charts.## Table of Contents
1. [Exploring Datasets with *pandas*](#0)
2. [Download... | permissive | /Jupyter notebook/IBM/Data visualization with python/DV0101EN-Exercise-Area-Plots-Histograms-and-Bar-Charts-py.ipynb | p-s-vishnu/Documents | 32 |
<jupyter_start><jupyter_text>## Transfer Learning (Tensorflow + VGG16 + CIFAR10)
The code below performs a complete transfer learning task. It is written to be easy to learn from and easy to modify for other tasks.
---
Forked from https://github.com/clebeson/Deep... | no_license | /transfer_learning/example.ipynb | hsneto/ufes-redes-neurais-profundas | 9 |
<jupyter_start><jupyter_text># Data Science Academy - Python Fundamentos - Chapter 8
## Download: http://github.com/dsacademybr## Matplotlib
To update Matplotlib, open the command prompt or terminal and type: pip install matplotlib -U## Building Plots<jupyter_code># matplotlib.pyplot is a collection of func... | no_license | /Cap08/Notebooks/.ipynb_checkpoints/DSA-Python-Cap08-03-Matplotlib-Plots-e-Graficos-checkpoint.ipynb | GuilhermeLis/FAD | 8
<jupyter_start><jupyter_text># SparkSession
https://spark.apache.org/docs/2.4.4/api/python/pyspark.html
https://spark.apache.org/docs/2.4.4/api/python/pyspark.sql.html<jupyter_code>import findspark
findspark.init()
import spark_utils
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sc... | no_license | /spark-hw2.ipynb | erezKeidan/ydata_lsml | 6 |
<jupyter_start><jupyter_text>Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)# MySQL Exercise 4: Summarizing your Data
Last week you practiced retrieving and formatting selected subsets of raw data from individual tables in a database. In this lesson we are going to learn how to ... | no_license | /MySQL_Exercise_04_Summarizing_Your_Data.ipynb | vinisan/MySQLCourse | 17 |
<jupyter_start><jupyter_text># Earth Engine analysis<jupyter_code>from urllib.request import urlopen
import zipfile
import rasterio
import json
import requests
from pprint import pprint
import matplotlib.pyplot as plt
from IPython import display
import numpy as np
import pandas as pd
import folium
import os
import ee<j... | permissive | /resilience/EE_analysis.ipynb | Vizzuality/notebooks | 8 |
<jupyter_start><jupyter_text># DonorsChoose
DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Right now, a large number of volunteers is needed to manually screen each submission before it's approved to be posted on the DonorsChoose.org website.
... | no_license | /.ipynb_checkpoints/DonorsChoose_LSTM-checkpoint.ipynb | chetanmedipally/Recurring-Neural-Networks_LSTM | 4 |
<jupyter_start><jupyter_text><jupyter_code>class Array2D:
def __init__(self, renglones, columnas):
self._reng = renglones
self._col = columnas
self._array = [[0 for y in range(self._col)] for x in range(self._reng)]
def clear(self, dato):
for ren in range(self._reng):
... | no_license | /JuegoDeLaVida_1358.ipynb | CarlosMelendezMejia/edd_1358_2021 | 1 |
<jupyter_start><jupyter_text># Linear Regression | Multiple Regression Analysis## Load the red wine dataset from the Wine Quality Data Set<jupyter_code>!wget https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv<jupyter_output>--2020-07-16 10:22:54-- https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red... | no_license | /MultipleLinearRegression.ipynb | koichi-inoue/JupyterNotebook | 10