# Basics of Deep Learning
In this notebook, we will cover the basics behind Deep Learning. I'm talking about building a brain....

Only kidding. Deep learning is a fascinating new field that has exploded over the last few...

```
import numpy as np

# We will be using a sigmoid activation function
def sigmoid(x):
    return 1/(1+np.exp(-x))

# Derivative of sigmoid(x) - will be used for backpropagating errors through the network
def sigmoid_prime(x):
    return sigmoid(x)*(1-sigmoid(x))

x = np.array([1,5])
y = 0.4
weights = np.array([-0.2,0....
```
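As a minimal sketch of how these two functions feed into training a single sigmoid neuron (the weights and learning rate here are illustrative assumptions, since the original cell is truncated):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1 - sigmoid(x))

x = np.array([1.0, 5.0])       # inputs
y = 0.4                        # target
w = np.array([-0.2, 0.3])      # illustrative weights (assumption)
learnrate = 0.5                # illustrative learning rate (assumption)

h = np.dot(x, w)               # weighted sum of inputs
output = sigmoid(h)            # the neuron's prediction
error = y - output
# delta rule: scale the error by the slope of the activation at h
del_w = learnrate * error * sigmoid_prime(h) * x
w = w + del_w
```

One such forward pass and gradient step is the core loop that backpropagation repeats for every layer and every training example.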
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import requests
import time
from config import weatherKey
from citipy import citipy
from scipy.stats import linregress

weatherAPIurl = f"http://api.openweathermap.org/data/2.5/weather?units=Imperial&APPID={weatherKey}&q="
outputPath = "./output/cit...
```
<a href="https://colab.research.google.com/github/mghendi/feedbackclassifier/blob/main/Feedback_and_Question_Classifier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## CCI508 - Language Technology Project
### Name: Samuel Mwamburi Mghendi
### ...

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pymysql
import os
import datetime

database = pymysql.connect(host="localhost", user="root", passwd="password", db="helpdesk")
cursor1 = database.cursor()
cursor1.execute("select * from issues limit 5;")
results ...
```
# Project : Advanced Lane Finding
The Goal of this Project
In this project, your goal is to write a software pipeline to identify the lane boundaries in a video from a front-facing camera on a car. The camera calibration images, test road images, and project videos are available in the project repository.
### The goa...

```
#importing packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import os
import collections as clx
from moviepy.editor import VideoFileClip
from IPython.display import HTML
%matplotlib inline
#%config InlineBackend.figure_format = 'retina'

# configurations Start
cam...
```
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think throug...

```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os

# Import API key
from api_keys import g_key

city_data_df = pd.read_csv("output_data/cities.csv")
city_data_df.head()

# Configure gmaps
gmaps.configure(api_key=g_key)

# Heatmap of humidi...
```
<center>
<img src="images/meme.png">
</center>
# Machine Learning
> A computer program is said to learn from experience $E$ with respect to some class of tasks $T$ and performance measure $P$ if its performance at tasks in $T$, as measured by $P$, improves with experience $E$. (T. M. Mitchell)
### Formul...

```
!conda install -c intel scikit-learn -y
import numpy
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
import warnings
warnings.simplefilter('ignore')
numpy.random.seed(7)
%matplotlib inline

iris = load_iris()
X = iris.data
Y = iris.target
print(X.shape)
random_sample = numpy.random.choice(X.s...
```
```
import numpy as np
import matplotlib.pyplot as pl
import pickle5 as pickle

rad_ratio = 7.860 / 9.449
temp_ratio = 315 / 95
scale = rad_ratio * temp_ratio
output_dir = '/Users/tgordon/research/exomoons_jwst/JexoSim/output/'
filename = 'OOT_SNR_NIRSpec_BOTS_PRISM_Kepler-1513 b_2020_11_23_2232_57.pickle'
result = pickle...
```
```
#VOTING
import nltk
import random
from nltk.corpus import movie_reviews
from nltk.classify import ClassifierI
from statistics import mode
from nltk.tokenize import word_tokenize
import pickle

class VoteClassifier(ClassifierI):
    def __init__(self, *classifiers):
        self._classifiers = classifiers
    def ...
```
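The truncated cell above suggests the classifier exposes a voting method. A stripped-down, plain-Python sketch of that idea (the class and method names here are illustrative, not NLTK's actual API beyond what the cell shows):

```python
from statistics import mode

class MajorityVote:
    """Each wrapped classifier casts a vote; the most common label wins."""
    def __init__(self, *classifiers):
        self._classifiers = classifiers

    def classify(self, features):
        votes = [c(features) for c in self._classifiers]
        return mode(votes)

    def confidence(self, features):
        # fraction of classifiers that agree with the winning label
        votes = [c(features) for c in self._classifiers]
        return votes.count(mode(votes)) / len(votes)

# Three toy "classifiers" (plain callables) voting on a feature dict
clf = MajorityVote(lambda f: "pos", lambda f: "pos", lambda f: "neg")
winner = clf.classify({})
agreement = clf.confidence({})
```

The confidence score is what makes an ensemble like this useful in practice: predictions with low agreement can be flagged or discarded.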
# Guide for Authors
```
print('Welcome to "Generating Software Tests"!')
```
This notebook compiles the most important conventions for all chapters (notebooks) of "Generating Software Tests".
## Organization of this Book
### Chapters as Notebooks
Each chapter comes in its own _Jupyter notebook_. A single noteboo...

```
from FooFuzzer import FooFuzzer
nbstripout --install --attributes .gitattributes
import random
random.random()
import fuzzingbook_utils
from Fuzzer import fuzzer
fuzzer(100, ord('0'), 10)

class Foo:
    def __init__(self):
        pass
    def bar(self):
        p...
```
# Getting started with the practicals
***These notebooks are best viewed in Jupyter. GitHub might not display all content of the notebook properly.***
## Goal of the practical exercises
The exercises have two goals:
1. Give you the opportunity to obtain 'hands-on' experience in implementing, training and evaluation...

```
import numpy as np
from sklearn.datasets import load_diabetes, load_breast_cancer

diabetes = load_diabetes()
breast_cancer = load_breast_cancer()
X = diabetes.data
Y = diabetes.target[:, np.newaxis]
print(X.shape)
print(Y.shape)

# use only the fourth feature
X = diabetes.data[:, np.newaxis, 3]
print(X.shape)
# us...
```
## Dataset definition
The dataset used will be "Electromyogram (EMG) Feature Reduction Using Mutual Components Analysis for Multifunction Prosthetic Fingers Control" [1]. More information can be found at: https://www.rami-khushaba.com/electromyogram-emg-repository.html
According to the figure s...

```
import numpy as np
from numpy import genfromtxt
import math
from librosa import stft
from scipy.signal import stft  # note: this shadows the librosa stft imported above
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
from glob import glob

# Getting the list of files
arquivos...
```
1. Read in the split sequences.
2. Get the alphabets and add in a padding character (' '), a stop character ('.'), and a start character ('$').
3. Save n x L x c arrays as h5py files. X is the mature sequence. y is the signal peptide.
4. Check that saved sequences decode correctly.
5. Save n x L arrays as h5py file...

```
import pickle
import h5py
import itertools
import numpy as np
from tools import CharacterTable

# read in data from pickle files
with open('../data/filtered_datasets/train_augmented_99.pkl', 'rb') as f:
    train_99 = pickle.load(f)
with open('../data/filtered_datasets/validate_99.pkl', 'rb') as f:
    validate_99 ...
```
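Steps 2-3 above can be sketched with plain NumPy. The alphabet and sequences here are made-up examples (the real notebook uses its `CharacterTable` helper, whose internals are not shown):

```python
import numpy as np

# amino-acid alphabet plus padding (' '), stop ('.'), and start ('$') characters
chars = sorted(set("ACDEFGHIKLMNPQRSTVWY")) + [' ', '.', '$']
char_to_idx = {c: i for i, c in enumerate(chars)}

def encode(seqs, L):
    """Return an n x L x c one-hot array: start char, sequence, stop char, then padding."""
    X = np.zeros((len(seqs), L, len(chars)), dtype=np.float32)
    for i, seq in enumerate(seqs):
        padded = ('$' + seq + '.').ljust(L, ' ')[:L]
        for j, ch in enumerate(padded):
            X[i, j, char_to_idx[ch]] = 1.0
    return X

X = encode(["MKT", "ACDE"], L=8)
# step 4's decode check: argmax over the channel axis recovers the padded string
decoded = ''.join(chars[k] for k in X[0].argmax(axis=1))
```

Each row of the last axis sums to one, which is exactly the property a softmax output layer is trained against.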
# Convolutional Neural Network Example
Build a convolutional neural network with TensorFlow.
This example is using TensorFlow layers API, see 'convolutional_network_raw' example
for a raw TensorFlow implementation with variables.
- Author: Aymeric Damien
- Project: https://github.com/aymericdamien/TensorFlow-Example...

```
from __future__ import division, print_function, absolute_import

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

# Training Parameters
learning_rate ...
```
```
%load_ext autoreload
%autoreload 2
import gust  # library for loading graph data
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import scipy.sparse as sp
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributions as dist
import time
import random
from scipy....
```
# Analyzing Portfolio Risk and Return
In this Challenge, you'll assume the role of a quantitative analyst for a FinTech investing platform. This platform aims to offer clients a one-stop online investment solution for their retirement portfolios that's both inexpensive and high quality. (Think about [Wealthfront](http...

```
# Import the required libraries and dependencies
import pandas as pd
from pathlib import Path
%matplotlib inline
import numpy as np
import os

# understanding where we are in the dir in order to have Path work correctly
os.getcwd()

# Import the data by reading in the CSV file and setting the DatetimeIndex
# Review ...
```
```
# install: tqdm (progress bars)
!pip install tqdm
import torch
import torch.nn as nn
import numpy as np
from tqdm.auto import tqdm
from torch.utils.data import DataLoader, Dataset, TensorDataset
import torchvision.datasets as ds
```
## Load the data (CIFAR-10)
```
def load_cifar(datadir='./data_cache'):  # will download ~400MB of data into this dir. Cha...
```
# Linear Programming
```
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as nl
import scipy.optimize as so
```
ref :
* Wikipedia [link](https://en.wikipedia.org/wiki/Linear_programming)
* Stackoverflow [link](https://stackoverflow.com/questions/62571092/)
* Tips & Tricks on Linux, Matlab...

```
L = 10
F = 10
F1 = 2
F2 = 3
P = 5
P1 = 2
P2 = 1
S1 = 20
S2 = 25

x1 = np.linspace(0, 2.5, 101)
x2 = np.linspace(0, 5, 101)
X1, X2 = np.meshgrid(x1, x2)
C = S1 * X1 + S2 * X2
C[X2 > (-F1 * X1 + F) / F2] = np.nan
C...
```
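The NaN-masking above amounts to a brute-force grid search over the feasible region. A hedged sketch of completing that search (assuming the goal is to *maximize* the cost surface $C = S_1 x_1 + S_2 x_2$, which the truncated cell does not confirm):

```python
import numpy as np

S1, S2 = 20, 25          # objective coefficients, as in the cell above
F, F1, F2 = 10, 2, 3     # constraint boundary: x2 <= (F - F1*x1) / F2

x1 = np.linspace(0, 2.5, 101)
x2 = np.linspace(0, 5, 101)
X1, X2 = np.meshgrid(x1, x2)
C = S1 * X1 + S2 * X2
feasible = X2 <= (-F1 * X1 + F) / F2   # same boundary the cell masks with NaN

# best feasible grid point and its objective value
masked = np.where(feasible, C, np.nan)
best = np.nanmax(masked)
i, j = np.unravel_index(np.nanargmax(masked), C.shape)
```

For an exact answer rather than a grid approximation, `scipy.optimize` (already imported as `so` above) provides `linprog`; the grid version is shown here only because it mirrors the cell's own approach.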
```
import pandas as pd
df = pd.read_csv("Poblacion_Ocupada_Condicion_Informalidad.csv",encoding='cp1252')
```
<p> Data obtained from <b> <a href="https://datos.gob.mx/busca/dataset/indicadores-estrategicos-poblacion-ocupada-por-condicion-de-informalidad">Indicadores Estratégicos/Población Ocupada por Condición De Inf...

```
list(df.columns)
df
Per = df['Periodo']
pd.unique(Per)
col = list(df.columns)
nom = ["Per", "EntFed", "Sex", "Edad", "Cond", "Cantidad"]
Dict = {}
for i in range(0, len(col)):
    var = list(pd.unique(df[col[i]]))
    ...
```
```
reset
# IMPORT PACKAGES
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
from netCDF4 import Dataset
import cartopy.crs as ccrs
import cartopy.feature as feature
import cmocean.cm
import pandas as pd
import xarray as xr
from scipy import signal
import coll...
```
# TL;DR *OCaml from the Very Beginning* by John Whitington
Notes, examples, answers etc. from the book, and some things that I wanted to check while reading the book.
## Chapter 1
1. OCaml uses this funny `;;` for marking end of statement.
2. Single `=` is used for checking equality (`2 = 2` is true).
3. Unlike Hask...

```ocaml
let x = 2 ;;
x + 2
let result = (let x = 6 in x * x) ;;
result
x
let square x = x * x ;;
square 2
square -2   (* type error: parsed as (square) - (2) *)
square (-2)

let doublePlusTwo x =
  let y = x + 2 in
  x + y ;;
doublePlusTwo 5

let rec factorial a =
  if a = 1 then 1 else a * factorial (a - 1)
factorial 5

let rec addToN n =
  if n = 1 then...
```
# Introduction to Deep Learning with PyTorch
In this notebook, you will get an introduction to [PyTorch](http://pytorch.org/), which is a framework for building and training neural networks (NN). ``PyTorch`` in a lot of ways behaves like the arrays you know and love from Numpy. These Numpy arrays, after all, are just ...

```
# First, install the pinned dependencies
!pip install torch==1.10.1
!pip install matplotlib==3.5.0
!pip install numpy==1.21.4
!pip install omegaconf==2.1.1
!pip install optuna==2.10.0
!pip install Pillow==9.0.0
!pip install scikit_learn==1.0.2
!pip install torchvision==0.11.2
!pip install transformers==4.15.0

# First, import PyTorch
im...
```
```
import json
import re
import urllib
import pandas as pd
import numpy as np

pd.set_option('display.max_columns', 50)
pd.set_option('display.max_colwidth', 100)

erasmus_plus_mobility = pd.concat([
    pd.read_excel(file)
    for file in [
        'input/ErasmusPlus_KA1_2014_LearningMobilityOfIndividuals_Projects_Overvi...
```
# 3D Spectral Image
**Suhas Somnath**
10/12/2018
**This example illustrates how a 3D spectral image would be represented in the Universal Spectroscopy and
Imaging Data (USID) schema and stored in a Hierarchical Data Format (HDF5) file, also referred to as the h5USID file.**
This document is intended as a supplement...

```
import subprocess
import sys
import os
import matplotlib.pyplot as plt
from warnings import warn
import h5py
%matplotlib notebook

def install(package):
    subprocess.call([sys.executable, "-m", "pip", "install", package])

try:
    # This package is not part of anaconda and may need to be installed.
    import wget...
```
This notebook works out the expected hillslope sediment flux, topography, and soil thickness for steady state on a 4x7 grid. This provides "ground truth" values for tests.
Let the hillslope erosion rate be $E$, the flux coefficient $D$, critical gradient $S_c$, and slope gradient $S$. The regolith thickness is $H$, wi...

```
D = 0.01
Sc = 0.8
Hstar = 0.5
E = 0.0001
P0 = 0.0002

import math
H = -Hstar * math.log(E / P0)
H
P0 * math.exp(-H / Hstar)
qs = 25 * E
qs
f = Hstar*(1.0 - math.exp(-H / Hstar))
f

import numpy as np
p = np.zeros(4)
p[0] = (f * D) / (Sc ** 2)
p[1] = 0.0
p[2] = f * D
p[3] = -qs
p
my_roots = np.roots(p)
my_roots
S...
```
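A quick consistency check on the cubic above: its positive real root $S$ should reproduce the flux through the nonlinear law $q_s = f D S \left(1 + (S/S_c)^2\right)$, which is where the polynomial coefficients come from (same parameter values as the cell; the check itself is an addition, not part of the notebook):

```python
import math
import numpy as np

D, Sc, Hstar, E, P0 = 0.01, 0.8, 0.5, 0.0001, 0.0002
H = -Hstar * math.log(E / P0)
qs = 25 * E
f = Hstar * (1.0 - math.exp(-H / Hstar))

# same cubic: (f*D/Sc^2) S^3 + f*D*S - qs = 0
p = [f * D / Sc**2, 0.0, f * D, -qs]
roots = np.roots(p)

# keep the (essentially) real, positive root - the physically meaningful slope
real_roots = roots.real[np.abs(roots.imag) < 1e-9]
S = real_roots[real_roots > 0][0]
flux = f * D * S * (1 + (S / Sc)**2)  # should recover qs
```

Plugging the root back into the flux law like this is a cheap guard against sign or coefficient mistakes when the polynomial is assembled by hand.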
<h1>PCA Training with BotNet (02-03-2018)</h1>
```
import os
import tensorflow as tf
import numpy as np
import itertools
import matplotlib.pyplot as plt
import gc
from datetime import datetime
from sklearn.utils import shuffle
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_s...
```
```
import os, requests
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.models import load_model, Sequential
from keras.preprocess...
```
<a href="https://colab.research.google.com/github/awikner/CHyPP/blob/master/TREND_Logistic_Regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Import libraries and sklearn and skimage modules.
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from skimage.util import invert

X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
plt.imshow(invert(X[0].res...
```
# Organization
Data analysis projects can quickly get out of hand and learning to manage them best will come with experience.
A few suggestions:
## Project Directory - Git Repository
When starting a new project create a directory that will contain everything pertaining to that project. Initialize it as a git repos...

```
data/
venv/
.ipynb_checkpoints/
```
# Becoming a Junior Data Analyst | R Programming and Data Science Applications
> Flow control: `while` loops
## 郭耀仁
> When you've given the same in-person advice 3 times, write a blog post.
>
> David Robinson
## Outline
- Use cases for logical values
- `while` loops
## Use cases for logical values
## Logical values appear in
- Conditional statements
- **`while` loops**
- Data filtering
## Loops solve tasks that would otherwise require repeated execution and lots of manual copy-pasting of code
## Print the even numbers between 1 and 100
```r
2
4
# ...
100
```
##... | github_jupyter | 2
4
# ...
100
i <- 1 # start
while (EXPR) { # stop
# do something iteratively until EXPR is evaluated as FALSE
i <- i + 1 # step
}
i <- 2
while (i <= 100) {
print(i)
i <- i + 2
}
i <- 2
even_summation <- 0
while (i <= 100) {
even_summation <- even_summation + i
i <- i + 2
}
even_summation
x <-... | 0.086859 | 0.776369 |
```
#source: https://www.kaggle.com/bhaveshsk/getting-started-with-titanic-dataset/data
#data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd

#data visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

#machine learning packages
from sklearn.linear_model...
```
# Load Packages
```
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
```
# Load Data Points (Do not modify the following block)
```
with open('training_data.npz', 'rb') as f:
    data = np.load(f)
    x_list = data['x_list']
    y_list = data['y_list']
    x_data = data['x_data']
    y_data = data['y_data']
    n_data = len(x_data)
    w = data['w']
    original_degr...
```
# Quantum Key Distribution
## 1. Introduction
When Alice and Bob want to communicate a secret message (such as Bob's online banking details) over an insecure channel (such as the internet), it is essential to encrypt the message. Since cryptography is a large area and almost all of it is outside the scope of this tex...

```
from qiskit import QuantumCircuit, Aer, transpile
from qiskit.visualization import plot_histogram, plot_bloch_multivector
from numpy.random import randint
import numpy as np

qc = QuantumCircuit(1,1)
# Alice prepares qubit in state |+>
qc.h(0)
qc.barrier()
# Alice now sends the qubit to Bob,
# who measures it in the X-b...
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Algorithms/Segmentation/segmentation_snic.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a tar...

```
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import gee...
```
# Introduction to XGBoost Spark with GPU
Taxi is an example of xgboost regressor. In this notebook, we will show you how to load data, train the xgboost model and use this model to predict "fare_amount" of your taxi trip.
A few libraries are required:
1. NumPy
2. cudf jar
3. xgboost4j jar
4. xgboost4j-spark j...

```
from ml.dmlc.xgboost4j.scala.spark import XGBoostRegressionModel, XGBoostRegressor
from ml.dmlc.xgboost4j.scala.spark.rapids import GpuDataReader
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql import SparkSession
from pyspark.sql.types import FloatType, IntegerType, StructField, StructType
from ...
```
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/Pytho...

```
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
import numpy as np
import pandas as pd

# Create some data
rng = np.random.RandomState(0)
x = np.linspace(0, 10, 500)
y = np.cumsum(rng.randn(500, 6), 0)

# Plot the data with Matplotlib defaults
plt.plot(x, y)
plt.legend('ABCDEF', ncol=2, loc=...
```
# Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_de...

```
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import time
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
print("Hello")

# This is needed to display the images.
%ma...
```
# SMARTS selection and depiction
## Depict molecular components selected by a particular SMARTS
This notebook focuses on selecting molecules containing fragments matching a particular SMARTS query, and then depicting the components (i.e. bonds, angles, torsions) matching that particular query.
```
import openeye.oechem as oechem
import openeye.oedepict as oedepict
from IPython.display import display
import os
from __future__ import print_function

def depictMatch(mol, match, width=500, height=200):
    """Take in an OpenEye molecule and a substructure match and display the results
    with (optionally) specified ...
```
## Summary
We face the problem of predicting tweet sentiment.
We have coded the text as Bag of Words and applied an SVM model. We have built a pipeline to check different hyperparameters using cross-validation. In the end, we obtained a good model which achieves an AUC of **0.92**.
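The Bag of Words encoding mentioned above can be sketched in plain Python with a toy vocabulary (the actual notebook builds this inside its pipeline, whose details are not shown here):

```python
from collections import Counter

def bag_of_words(tweets):
    """Map each tweet to a vector of word counts over a shared, sorted vocabulary."""
    tokenized = [t.lower().split() for t in tweets]
    vocab = sorted({w for toks in tokenized for w in toks})
    vectors = []
    for toks in tokenized:
        counts = Counter(toks)
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors

vocab, X = bag_of_words(["good movie", "bad bad movie"])
```

Each tweet becomes a fixed-length count vector, which is the numeric representation an SVM (or any other classifier) can then be trained on.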
## Data loading and cleaning
...

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
import seaborn as sns
import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
from nltk.tokenize import TweetTokenizer
fro...
```
# <span style="color:red">Seaborn | Part-14: FacetGrid:</span>
Welcome to another lecture on *Seaborn*! Our journey began with assigning *style* and *color* to our plots as per our requirement. Then we moved on to *visualizing the distribution of a dataset* and *linear relationships*, and further we dived into topics cover...

```
# Importing intrinsic libraries:
import numpy as np
import pandas as pd
np.random.seed(101)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(style="whitegrid", palette="rocket")
import warnings
warnings.filterwarnings("ignore")

# Let us also get tableau colors we defined earlier:
tablea...
```
# MIST101 Pratical 1: Introduction to Tensorflow (Basics of Tensorflow)
## What is Tensor
The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions. Here are some examples of tensor...

```
3 # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]

import tensorflow as tf
node1 = tf.constant(3.0, dtype=...
```
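As a side note (not part of the original notebook), the same rank examples can be checked without TensorFlow using NumPy's `ndim` and `shape`:

```python
import numpy as np

scalar = np.array(3.)                                # rank 0; shape ()
vector = np.array([1., 2., 3.])                      # rank 1; shape (3,)
matrix = np.array([[1., 2., 3.], [4., 5., 6.]])      # rank 2; shape (2, 3)
cube = np.array([[[1., 2., 3.]], [[7., 8., 9.]]])    # rank 3; shape (2, 1, 3)

ranks = (scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)  # number of dimensions of each
```

Rank is simply the length of the shape tuple, which is the same convention TensorFlow tensors follow.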
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [cbe61622](https://jckantor.github.io/cbe61622);
content is available [on Github](https://github.com/jckantor/cbe61622.git).*
<!--NAVIGATION-->
< [4.0 Chemical Instrumentation](https://jckantor.github.io/cbe61622/04.00-Chemical_Instrumentation.html) | [Conte... | github_jupyter | <!--NOTEBOOK_HEADER-->
*This notebook contains material from [cbe61622](https://jckantor.github.io/cbe61622);
content is available [on Github](https://github.com/jckantor/cbe61622.git).*
<!--NAVIGATION-->
< [4.0 Chemical Instrumentation](https://jckantor.github.io/cbe61622/04.00-Chemical_Instrumentation.html) | [Conte... | 0.672762 | 0.703193 |
<a href="https://colab.research.google.com/github/linked0/deep-learning/blob/master/AAMY/cifar10_cnn_my.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
'''
#Train a simple deep CNN on the CIFAR10 small images dataset.
It gets to 75% validation accuracy in 25 epochs, and 79% after 50 epochs.
(It's still underfitting at that point, though).
'''
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import I...
```
# Hello Image Segmentation
A very basic introduction to using segmentation models with OpenVINO.
We use the pre-trained [road-segmentation-adas-0001](https://docs.openvinotoolkit.org/latest/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model...

```
import cv2
import matplotlib.pyplot as plt
import numpy as np
import sys
from openvino.runtime import Core

sys.path.append("../utils")
from notebook_utils import segmentation_map_to_image

ie = Core()
model = ie.read_model(model="model/road-segmentation-adas-0001.xml")
compiled_model = ie.compile_model(model=model, d...
```
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)
# Gaussian Probabilities
```
#format the book
%matplotlib notebook
from __future__ import division, print_function
from book_format import load_style
load_style()
```
## Introducti... | github_jupyter | #format the book
%matplotlib notebook
from __future__ import division, print_function
from book_format import load_style
load_style()
import numpy as np
x = [1.85, 2.0, 1.7, 1.9, 1.6]
print(np.mean(x))
print(np.median(x))
X = [1.8, 2.0, 1.7, 1.9, 1.6]
Y = [2.2, 1.5, 2.3, 1.7, 1.3]
Z = [1.8, 1.8, 1.8, 1.8, 1.8]
prin... | 0.68056 | 0.992386 |
<h1>KRUSKAL'S ALGORITHM</h1>
```
import math
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from collections import defaultdict
import timeit as time

print('Kruskal\'s Algorithm For Undirected Graphs\n')
print('1. Input 1 - Undirected Graph')
print('2. Input 2 - Undirected Graph')
print('3. Input 3 - Undi...
```
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid', font_scale=1.5)
%matplotlib inline
```
In order to test a number of GradeIT features including the "bridge builder", a trip segment from the San Francisco Bay Area was identified. The GPS data from the trip shows the ve...

```
df = pd.read_csv('data/SF_bridge_trip_segment.csv')
df.head()
fig, ax = plt.subplots(figsize=(9,5))
df.plot(x='longitude', y='latitude', ax=ax)
plt.ylabel('latitude');
fig, ax = plt.s...
```
## Introduction
(You can also read this article on our website, [easy-tensorFlow](http://www.easy-tensorflow.com/basics/graph-and-session))
Why do we need tensorflow? Why are people crazy about it? In a way, it is lazy computing and offers flexibility in the way you run your code. What is this thing with flexibility a...

```
import tensorflow as tf
a = 2
b = 3
c = tf.add(a, b, name='Add')
print(c)
sess = tf.Session()
print(sess.run(c))
sess.close()
with tf.Session() as sess:
    print(sess.run(c))

import tensorflow as tf
x = 2
y = 3
add_op = tf.add(x, y, name='Add')
mul_op = tf.multiply(x, y, name='Multiply')
po...
```
# Example Layer 2/3 Microcircuit Simulation
```
#===============================================================================================================
# 2021 Hay lab, Krembil Centre for Neuroinformatics, Summer School. Code available for educational purposes only
#============================================... | github_jupyter | #===============================================================================================================
# 2021 Hay lab, Krembil Centre for Neuroinformatics, Summer School. Code available for educational purposes only
#=============================================================================================... | 0.246806 | 0.582521 |
```
import numpy as np
import logging
import torch
import torch.nn.functional as F
import numpy as np
from tqdm import trange
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel
# Load pre-trained model tokenizer (vocabulary)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
text_1 = "It wa... | github_jupyter | import numpy as np
import logging
import torch
import torch.nn.functional as F
import numpy as np
from tqdm import trange
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel
# Load pre-trained model tokenizer (vocabulary)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
text_1 = "It was ne... | 0.871297 | 0.707859 |
```
# matplotlib inline plotting
%matplotlib inline
# make inline plotting higher resolution
%config InlineBackend.figure_format = 'svg'
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import re
from sklearn.ensemble import RandomForestClassifier
from statsmodels.api import ... | github_jupyter | # matplotlib inline plotting
%matplotlib inline
# make inline plotting higher resolution
%config InlineBackend.figure_format = 'svg'
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import re
from sklearn.ensemble import RandomForestClassifier
from statsmodels.api import OLS
... | 0.433262 | 0.917191 |
```
import fnmatch
import os
import numpy
import pandas
import seaborn
# generate an empty dataframe
df = pandas.DataFrame(columns = ["business_id", "url", "name", "open_precovid", "open_postcovid", "address", "city", "state", "postal_code"])
# loop over all files in the directory and concatenate the scrape output file... | github_jupyter | import fnmatch
import os
import numpy
import pandas
import seaborn
# generate an empty dataframe
df = pandas.DataFrame(columns = ["business_id", "url", "name", "open_precovid", "open_postcovid", "address", "city", "state", "postal_code"])
# loop over all files in the directory and concatenate the scrape output files
fo... | 0.142441 | 0.140366 |
# Module 1. Dataset Cleaning and Analysis
This lab walks you through building a movie recommendation model based on data collected from the MovieLens dataset.<br/>In Module 1, we load the MovieLens dataset, examine each feature, and carry out data cleansing and analysis.
## How to Use This Notebook
The code is organized into several code cells. You can run each cell and move on to the next by clicking the triangular Run button at the top of this page, or by pressing the keyboard shortcut `Shift + Enter` in a cell ... | github_jupyter | import boto3
import json
import numpy as np
import pandas as pd
import time
import jsonlines
import os
from datetime import datetime
import sagemaker
import time
import warnings
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
import matplotlib.dates as mdate
from botocore.exceptions import ... | 0.148448 | 0.957437 |
# Loading Image Data
So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural network... | github_jupyter | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torchvision import datasets, transforms
import helper
dataset = datasets.ImageFolder('path/to/data', transform=transform)
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png
root/cat/123.png
root/cat... | 0.532911 | 0.990422 |
# Analyzing Simulation Trajectories
- toc: false
- branch: master
- badges: true
- comments: false
- categories: [grad school, molecular modeling, scientific computing]
Let's say you've conducted a simulation.
Everything up to that point (parametrization, initialization, actually running the simulation) will be assu... | github_jupyter | import mdtraj
traj = mdtraj.load('trajectory.xtc', top='em.gro')
traj
traj.topology.atom(0)
traj.topology.atom(0).residue
traj.topology.residue(0)
traj.topology.residue(0).atom(2)
traj.topology.atom(100).index
traj.topology.select("element N")
traj.xyz
traj.xyz.shape
traj.xyz[0].shape
traj.xyz[:, [1,2,3],:].s... | 0.446736 | 0.988668 |
```
import dpkt
import os
import struct
import numpy as np
from collections import defaultdict
from pprint import pprint
try:
from Memoizer import memoize_to_folder
memoize = memoize_to_folder("e2e_memoization")
except:
# In case Memoizer isn't present, this decorator will just do nothing
memoize = lam... | github_jupyter | import dpkt
import os
import struct
import numpy as np
from collections import defaultdict
from pprint import pprint
try:
from Memoizer import memoize_to_folder
memoize = memoize_to_folder("e2e_memoization")
except:
# In case Memoizer isn't present, this decorator will just do nothing
memoize = lambda ... | 0.218086 | 0.200499 |
In support of the World Bank's ongoing support to the CoVID response in Africa, the INFRA-SAP team has partnered with the Chief Economist of HD to analyze the preparedness of the health system to respond to CoVID, focusing on ideas around infrastructure: access to facilities, demographics, electrification, and connecti... | github_jupyter | import os, sys, importlib
import rasterio, affine, gdal
import networkx as nx
import geopandas as gpd
import pandas as pd
import numpy as np
import skimage.graph as graph
from shapely.geometry import Point, shape, box
from shapely.wkt import loads
from shapely.ops import cascaded_union
from rasterio import features
... | 0.250638 | 0.639483 |
```
from typing import Tuple, Dict, Callable, Iterator, Union, Optional, List
import os
import sys
import yaml
import numpy as np
import torch
from torch import Tensor
import gym
# To import module code.
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_pat... | github_jupyter | from typing import Tuple, Dict, Callable, Iterator, Union, Optional, List
import os
import sys
import yaml
import numpy as np
import torch
from torch import Tensor
import gym
# To import module code.
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
f... | 0.727782 | 0.433382 |
```
%reload_ext blackcellmagic
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from qflow.wavefunctions import (
JastrowMcMillian,
JastrowPade,
JastrowOrion,
SimpleGaussian,
WavefunctionProduct,
FixedWavefunction,
Dnn,
SumPooling,
)
from qflow.wavefunctions.nn.laye... | github_jupyter | %reload_ext blackcellmagic
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from qflow.wavefunctions import (
JastrowMcMillian,
JastrowPade,
JastrowOrion,
SimpleGaussian,
WavefunctionProduct,
FixedWavefunction,
Dnn,
SumPooling,
)
from qflow.wavefunctions.nn.layers i... | 0.423577 | 0.695984 |
## Deliverable 2. Create a Customer Travel Destinations Map.
```
# Dependencies and Setup
import pandas as pd
import requests
import gmaps
# Import API key
from config import g_key
# Configure gmaps API key
gmaps.configure(api_key=g_key)
# 1. Import the WeatherPy_database.csv file.
city_data_df = pd.read_csv("../We... | github_jupyter | # Dependencies and Setup
import pandas as pd
import requests
import gmaps
# Import API key
from config import g_key
# Configure gmaps API key
gmaps.configure(api_key=g_key)
# 1. Import the WeatherPy_database.csv file.
city_data_df = pd.read_csv("../Weather_Database/Weather_Database.csv")
city_data_df.head()
# 2. Pro... | 0.441191 | 0.701713 |
Lambda School Data Science
*Unit 2, Sprint 2, Module 3*
---
# Cross-Validation
- Do **cross-validation** with an independent test set
- Use scikit-learn for **hyperparameter optimization**
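The two bullet points above can be illustrated together in one short sketch — synthetic data stands in for the module's dataset (an assumption for this example):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy classification data in place of the module's real dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Hold out an independent test set *before* any tuning happens.
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, random_state=0)

# Cross-validated grid search over the regularization strength C.
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0]}, cv=5)
search.fit(X_trainval, y_trainval)

# Only now touch the test set, once, for an unbiased estimate.
test_score = search.score(X_test, y_test)
```

Because the test split never participates in the cross-validated search, `test_score` estimates generalization rather than tuning luck.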
### Setup
Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds... | github_jupyter | %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
!pip install pandas-profiling==2.*
# If you're working locally:
else:
DATA_PATH = '../da... | 0.501221 | 0.927495 |
```
import pandas as pd
import panel as pn
pn.extension()
```
The ``DataFrame`` pane renders pandas, dask and streamz ``DataFrame`` and ``Series`` types as an HTML table. If you need to edit the values of a `DataFrame` use the `DataFrame` widget instead. The Pane supports all the arguments to the `DataFrame.to_html` ... | github_jupyter | import pandas as pd
import panel as pn
pn.extension()
df = pd.util.testing.makeMixedDataFrame()
df_pane = pn.pane.DataFrame(df)
df_pane
pn.panel(df_pane.param, parameters=['bold_rows', 'index', 'header', 'max_rows', 'show_dimensions'],
widgets={'max_rows': {'start': 1, 'end': len(df), 'value': len(df)}})
... | 0.301876 | 0.954351 |
Package installation:
```
pip install eli5
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
import eli5
from eli5.sklearn import... | github_jupyter | pip install eli5
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
import eli5
from eli5.sklearn import PermutationImportance
fro... | 0.498291 | 0.797162 |
```
%%sh
pip install -q pip --upgrade
pip install -q sagemaker smdebug awscli --upgrade --user
```
## Download the Fashion-MNIST dataset
```
import os
import numpy as np
from tensorflow.keras.datasets import fashion_mnist
(x_train, y_train), (x_val, y_val) = fashion_mnist.load_data()
os.makedirs("./data", exist_ok ... | github_jupyter | %%sh
pip install -q pip --upgrade
pip install -q sagemaker smdebug awscli --upgrade --user
import os
import numpy as np
from tensorflow.keras.datasets import fashion_mnist
(x_train, y_train), (x_val, y_val) = fashion_mnist.load_data()
os.makedirs("./data", exist_ok = True)
np.savez('./data/training', image=x_train, ... | 0.454714 | 0.818918 |
# Evolutionary Path Relinking
In Evolutionary Path Relinking we **relink solutions in the elite set**
> This operation can be performed periodically (e.g. every 10 iterations) or as a post-processing step once all iterations of the algorithm are complete or a time limit has been reached.
----
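Before the imports, here is a minimal sketch of the relinking move itself for permutation-encoded solutions — a hypothetical helper for illustration, not part of the metapy package used below:

```python
def path_relink(start, target):
    """Walk from `start` toward `target` one swap at a time,
    yielding every intermediate permutation on the relinking path."""
    current = list(start)
    for i in range(len(current)):
        if current[i] != target[i]:
            # Swap target[i]'s value into position i.
            j = current.index(target[i])
            current[i], current[j] = current[j], current[i]
            yield list(current)
```

Each yielded permutation is one step on the path from the initiating solution to the guiding solution; a full evolutionary path-relinking loop would evaluate every intermediate tour with the TSP objective and keep the best one found.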
## Imports
```
fr... | github_jupyter | from itertools import combinations
import numpy as np
import sys
# install metapy if running in Google Colab
if 'google.colab' in sys.modules:
!pip install meta-py
from metapy.tsp import tsp_io as io
from metapy.tsp.euclidean import gen_matrix, plot_tour
from metapy.tsp.objective import OptimisedSimpleTSPObjectiv... | 0.436622 | 0.807461 |
<a href="https://colab.research.google.com/github/gandalf1819/SF-Opioid-Crisis/blob/master/SF_drug_Random_forest.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/gdrive')
import numpy as np
imp... | github_jupyter | from google.colab import drive
drive.mount('/content/gdrive')
import numpy as np
import pandas as pd
import os
d_crime = pd.read_csv("/content/gdrive/My Drive/SF dataset/Police_Department_Incident_Reports__Historical_2003_to_May_2018.csv")
d_crime.columns
np.random.seed(100)
random_d_crime=d_crime.sample(2215024)
train... | 0.404155 | 0.795102 |
```
import pandas as pd
df = pd.read_csv( '/Users/jun/Downloads/body.csv', encoding="utf_8")
# display( df )
values = df.values
```
## Plotting the Waist Distribution
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from scipy.optimize import curve_fit
def func(x, a, mu, sigma):
... | github_jupyter | import pandas as pd
df = pd.read_csv( '/Users/jun/Downloads/body.csv', encoding="utf_8")
# display( df )
values = df.values
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from scipy.optimize import curve_fit
def func(x, a, mu, sigma):
return a*np.exp( -(x-mu)... | 0.434461 | 0.851459 |
Universidade Federal do Rio Grande do Sul (UFRGS)
Programa de Pós-Graduação em Engenharia Civil (PPGEC)
# Project PETROBRAS (2018/00147-5):
## Attenuation of dynamic loading along mooring lines embedded in clay
---
_Prof. Marcelo M. Rocha, Dr.techn._ [(ORCID)](https://orcid.org/0000-0001-5640-1020)
Porto Ale... | github_jupyter | # Importing Python modules required for this notebook
# (this cell must be executed with "shift+enter" before any other Python cell)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Importing "pandas dataframe" with dimension exponents for scales calculation
DimData = pd.read_excel('resources/D... | 0.626238 | 0.941007 |
<center>
<img src="https://raw.githubusercontent.com/Yorko/mlcourse.ai/master/img/ods_stickers.jpg" />
## [mlcourse.ai](https://mlcourse.ai) - Open Machine Learning Course
<center>
Author: [Yury Kashnitskiy](https://yorko.github.io). Translated by Anna Larionova and [Ousmane Cissé](https://fr.linkedin.com/in/ousmane... | github_jupyter | import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set() # just to use the seaborn theme
from sklearn.datasets import load_boston
from sklearn.linear_model import Lasso, LassoCV, Ridge, RidgeCV
from sklearn.model_sele... | 0.764716 | 0.984094 |
# The Physics of Sound, Part I
[return to main page](index.ipynb)
## Preparations
For this exercise we need the [Sound Field Synthesis Toolbox for Python](http://python.sfstoolbox.org):
```
import sfs
```
And some other stuff:
```
# remove "inline" to get a separate plotting window:
%matplotlib inline
import ma... | github_jupyter | import sfs
# remove "inline" to get a separate plotting window:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from numpy.core.umath_tests import inner1d
grid = sfs.util.xyz_grid([-2, 2], [-2, 2], 0, spacing=0.01)
### create 10000 randomly distributed particles
particles = [np.random.uniform... | 0.577019 | 0.995291 |
# GCP Dataflow Component Sample
A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner.
## Intended use
Use this component to run a Python Beam code to submit a Cloud Dataflow job as... | github_jupyter | component_op(...)
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
!python3 -m pip install 'kfp>=0.1.31' --quiet
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubu... | 0.342572 | 0.951278 |
```
!pip install gluoncv # -i https://opentuna.cn/pypi/web/simple
%matplotlib inline
```
4. Transfer Learning with Your Own Image Dataset
=======================================================
Dataset size is a big factor in the performance of deep learning models.
``ImageNet`` has over one million labeled images, b... | github_jupyter | !pip install gluoncv # -i https://opentuna.cn/pypi/web/simple
%matplotlib inline
import mxnet as mx
import numpy as np
import os, time, shutil
from mxnet import gluon, image, init, nd
from mxnet import autograd as ag
from mxnet.gluon import nn
from mxnet.gluon.data.vision import transforms
from gluoncv.utils import m... | 0.679179 | 0.925769 |
## Diffusion Tensor Imaging (DTI)
Diffusion tensor imaging or "DTI" refers to images describing diffusion with a tensor model. DTI is derived from preprocessed diffusion weighted imaging (DWI) data. First proposed by Basser and colleagues ([Basser, 1994](https://www.ncbi.nlm.nih.gov/pubmed/8130344)), the diffusion ten... | github_jupyter | import bids
from bids.layout import BIDSLayout
from dipy.io.gradients import read_bvals_bvecs
from dipy.core.gradients import gradient_table
from nilearn import image as img
import nibabel as nib
bids.config.set_option('extension_initial_dot', True)
deriv_layout = BIDSLayout("../../../data/ds000221/derivatives", vali... | 0.68616 | 0.99153 |
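Once a tensor has been fitted, the most common scalar summary derived from its eigenvalues is fractional anisotropy (FA). A sketch of the standard formula in plain NumPy (illustrative, not the notebook's dipy pipeline):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||.

    0 for isotropic diffusion, approaching 1 along a single direction.
    """
    ev = np.asarray(evals, dtype=float)
    dev = ev - ev.mean()
    return float(np.sqrt(1.5) * np.linalg.norm(dev) / np.linalg.norm(ev))
```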
**Introduction and Workspace setting**
We collected a valuable dataset just before the election from random street interviews in the Kaduwela area of Colombo, Sri Lanka, in order to predict the winning presidential election candidate of Sri Lanka in the 2019 polls; we also collected people's rationale behind their decision and try ... | github_jupyter | library(tidyverse) # metapackage with lots of helpful functions
list.files(path = "../input/srilankanpresidentialelectionprediction2019")
roadInterviewData <- read.csv(file="../input/srilankanpresidentialelectionprediction2019/face_to_face_road_interviews.csv", header=TRUE, sep=",")
head(roadInterviewData)
summary(roa... | 0.319227 | 0.948489 |
```
import pandas as pd
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm
# Getting the database
df_data = pd.read_excel('proshares_analysis_data.xlsx', header=0, index_col=0, sheet_name='merrill_factors')
df_data.head()
```
# Section 1 - Short answer
1.1 Mean-variance optimization goes lon... | github_jupyter | import pandas as pd
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm
# Getting the database
df_data = pd.read_excel('proshares_analysis_data.xlsx', header=0, index_col=0, sheet_name='merrill_factors')
df_data.head()
# 2.1 What are the weights of the tangency portfolio, wtan?
rf_lab = 'USGG3... | 0.777638 | 0.904777 |
# Introductory Course on Data Analysis and Modeling with Python
<img src="../images/cacheme.png" alt="logo" style="width: 150px;"/>
<img src="../images/aeropython_logo.png" alt="logo" style="width: 115px;"/>
---
# Pandas: Loading and Basic Manipulation of Data
_So far we have seen the different structures p... | github_jupyter | # preserve
from IPython.display import HTML
HTML('<iframe src="https://opendata.aemet.es/centrodedescargas/inicio" width="700" height="400"></iframe>')
# en linux
#!head ../data/alicante_city_climate_aemet.csv
# en windows
# !more ..\data\alicante_city_climate_aemet.csv
# recuperar los tipos de datos de cada columna
... | 0.342791 | 0.988142 |
```
# importing required packages
from pyspark.sql import SparkSession
from pyspark.ml.feature import HashingTF, IDF, Normalizer, Word2Vec
from pyspark.ml.linalg import DenseVector, Vectors, VectorUDT
from pyspark.sql.functions import col, explode, udf, concat_ws, collect_list, split
from pyspark.ml.recommendation imp... | github_jupyter | # importing required packages
from pyspark.sql import SparkSession
from pyspark.ml.feature import HashingTF, IDF, Normalizer, Word2Vec
from pyspark.ml.linalg import DenseVector, Vectors, VectorUDT
from pyspark.sql.functions import col, explode, udf, concat_ws, collect_list, split
from pyspark.ml.recommendation import ... | 0.38318 | 0.556641 |
<a href="https://colab.research.google.com/github/TarekAzzouni/Baterries-ML-Lithium-Ions-01/blob/main/Data_Driven_model_for_HNEI_DATASET_(_Machine_learning_part).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Description of the dataset:
A batc... | github_jupyter | import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib.colors import ListedColormap
from sklearn.metrics import plot_confusion_matrix
from scipy.stats import norm, boxcox
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
from collections import Counter
from scip... | 0.58166 | 0.955026 |
```
try:
import openmdao.api as om
except ImportError:
!python -m pip install openmdao[notebooks]
import openmdao.api as om
```
# BoundsEnforceLS
The BoundsEnforceLS only backtracks until variables violate their upper and lower bounds.
Here is a simple example where BoundsEnforceLS is used to backtrack d... | github_jupyter | try:
import openmdao.api as om
except ImportError:
!python -m pip install openmdao[notebooks]
import openmdao.api as om
import numpy as np
import openmdao.api as om
from openmdao.test_suite.components.implicit_newton_linesearch import ImplCompTwoStatesArrays
top = om.Problem()
top.model.add_subsystem('co... | 0.825379 | 0.8059 |
```
import pyspark
from pyspark import SparkConf
from pyspark import SparkContext, SQLContext
import pandas as pd
import seaborn as sns
# You can configure the SparkContext
conf = SparkConf()
conf.set('spark.sql.shuffle.partitions', '2100')
conf.set("spark.executor.cores", "5")
SparkContext.setSystemProperty('spark.ex... | github_jupyter | import pyspark
from pyspark import SparkConf
from pyspark import SparkContext, SQLContext
import pandas as pd
import seaborn as sns
# You can configure the SparkContext
conf = SparkConf()
conf.set('spark.sql.shuffle.partitions', '2100')
conf.set("spark.executor.cores", "5")
SparkContext.setSystemProperty('spark.execut... | 0.358465 | 0.288636 |
# Classroom exercise: energy calculation
## Diffusion model in 1D
Description: A one-dimensional diffusion model. (Could be a gas of particles, or a bunch of crowded people in a corridor, or animals in a valley habitat...)
- Agents are on a 1d axis
- Agents do not want to be where there are other agents
- This is re... | github_jupyter | %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
density = np.array([0, 0, 3, 5, 8, 4, 2, 1])
fig, ax = plt.subplots()
ax.bar(np.arange(len(density)) - 0.5, density)
ax.xrange = [-0.5, len(density) - 0.5]
ax.set_ylabel("Particle count $n_i$")
ax.set_xlabel("Position $i$")
%%bash
rm -rf diffu... | 0.724188 | 0.982305 |
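For the density histogram in the code above, one common choice of energy for this exercise penalizes sites holding more than one agent, E = sum over i of n_i * (n_i - 1) — an assumed form for this sketch:

```python
import numpy as np

def energy(density):
    # E = sum_i n_i * (n_i - 1): zero when no site holds more than one
    # agent, and growing quadratically as sites get crowded.
    n = np.asarray(density)
    return int(np.sum(n * (n - 1)))
```

A move (an agent hopping to a neighboring site) is then accepted or rejected based on how it changes this energy.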
```
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from tkinter import *
import tkinter as tk
from tkinter import filedialog
root= tk.Tk()
root.resizable(0, 0)
root.title("Iris Prediction")
canva... | github_jupyter | import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from tkinter import *
import tkinter as tk
from tkinter import filedialog
root= tk.Tk()
root.resizable(0, 0)
root.title("Iris Prediction")
canvas1 =... | 0.25488 | 0.12603 |
```
import itertools
from skyscanner import FlightsCache
service = FlightsCache('se893794935794863942245517499220')
params = dict(
market='US',
currency='USD',
locale='en-US',
destinationplace='US',
outbounddate='2016-08',
inbounddate='2016-08')
user1_params = dict(originplace='DTW-sky')
user... | github_jupyter | import itertools
from skyscanner import FlightsCache
service = FlightsCache('se893794935794863942245517499220')
params = dict(
market='US',
currency='USD',
locale='en-US',
destinationplace='US',
outbounddate='2016-08',
inbounddate='2016-08')
user1_params = dict(originplace='DTW-sky')
user2_pa... | 0.267408 | 0.092401 |
# Tuples
Tuples are immutable
---
A "tuple" is defined with the syntax: (element1, element2, ..., elementN)
- The parentheses bind all the elements together
- Commas separate the elements one by one
## 1. Creating and Accessing a Tuple
- Python tuples are similar to lists, except that a tuple cannot be modified once it has been created, much like a string.
- Tuples use parentheses, while lists use square brackets.
- Like lists, tuples can also be **indexed and sliced** with integers.
```
t1 = (1, 10.31, 'python')
t2 = 1, 10.31, 'python'
print(t1, type(t1))
# (1, 10.31, 'python') <clas... | github_jupyter | t1 = (1, 10.31, 'python')
t2 = 1, 10.31, 'python'
print(t1, type(t1))
# (1, 10.31, 'python') <class 'tuple'>
print(t2, type(t2))
# (1, 10.31, 'python') <class 'tuple'>
tuple1 = (1, 2, 3, 4, 5, 6, 7, 8)
print(tuple1[1]) # 2
print(tuple1[5:]) # (6, 7, 8)
print(tuple1[:5]) # (1, 2, 3, 4, 5)
tuple2 = tuple1[:]
print(t... | 0.078212 | 0.828106 |
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/1_getting_started_roadmap/8_expert_mode/2)%20Create%20experiment%20from%20scratch%20-%20Pytorch%20backend%20-%20train%2C%20validate%2C%20infer.ipynb" target="_parent"><img src="https://colab.research.google.com/asset... | github_jupyter | !git clone https://github.com/Tessellate-Imaging/monk_v1.git
# If using Colab install using the commands below
!cd monk_v1/installation/Misc && pip install -r requirements_colab.txt
# If using Kaggle uncomment the following command
#!cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt
# Select the ... | 0.483892 | 0.904017 |
<a href="https://colab.research.google.com/github/krakowiakpawel9/convnet-course/blob/master/02_mnist_cnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Training a Simple Neural Network on the MNIST Dataset
```
import keras
from keras.datasets i... | github_jupyter | import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import warnings
warnings.filterwarnings('ignore')
# define the input image dimensions
img_rows, img_co... | 0.728941 | 0.950915 |
```
import pandas as pd
import numpy as np
from os import path
import matplotlib.pyplot as plt
import seaborn as sns
import librosa
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
import matplotlib as mpl
metadata = pd.read_csv('metadata.csv')
plt.st... | github_jupyter | import pandas as pd
import numpy as np
from os import path
import matplotlib.pyplot as plt
import seaborn as sns
import librosa
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
import matplotlib as mpl
metadata = pd.read_csv('metadata.csv')
plt.style.... | 0.124519 | 0.653085 |
# Overview of the nmrsim Top-Level API
This notebook gives a tour of the top-level classes the nmrsim API provides. These are conveniences that abstract away lower-level API functions. Users wanting more control can consult the full API documentation.
```
import os
import sys
import numpy as np
import matplotlib as m... | github_jupyter | import os
import sys
import numpy as np
import matplotlib as mpl
mpl.rcParams['figure.dpi']= 300
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # makes inline plot look less blurry
home_path = os.path.abspath(os.path.join('..', '..', '..'))
if home_path not in sys.path:
sys.path.append(home_path)
... | 0.289472 | 0.98882 |
# Think Bayes: Chapter 7
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
```
from __future__ import print_function, division
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filter... | github_jupyter | from __future__ import print_function, division
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkbayes2
import thinkplot
### Solution
### Solution
### Solution
### Solutio... | 0.772788 | 0.983738 |
# [Classification Quality Metrics](https://www.coursera.org/learn/vvedenie-mashinnoe-obuchenie/programming/vfD6M/mietriki-kachiestva-klassifikatsii)
## Introduction
In classification problems there can be many particularities that affect how quality is measured: different costs of errors, class imbalance, and so on. Because of this, there ex... | github_jupyter | import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
data = pd.read_csv('./data/classification.csv', sep=",")
true = data['true']
pred = data['pred']
data.head()
TP = len(data[(data['pred'] == 1) & (data['true'] == 1)])
FP = len(data[(data['pred'] == 1) & (data['true'] == 0)])
FN... | 0.494385 | 0.915205 |
# Software Design for Scientific Computing
----
## Unit 3: Object Relational Mappers
## Python + RDBMS
```python
import MySQLdb
db = MySQLdb.connect(host='localhost',user='root',
passwd='',db='Prueba')
cursor = db.cursor()
cursor.execute('Select * From usuarios')
resultado = cursor.fetchall... | github_jupyter | import MySQLdb
db = MySQLdb.connect(host='localhost',user='root',
passwd='',db='Prueba')
cursor = db.cursor()
cursor.execute('Select * From usuarios')
resultado = cursor.fetchall()
print('Datos de Usuarios')
for registro in resultado:
print(registro[0], '->', registro[1])
Datos de Usuarios
USU... | 0.174305 | 0.773302 |
# Neural Network Q Learning Part 2: Looking at what went wrong and becoming less greedy
In the previous part, we created a simple Neural Network based player and had it play against the Random Player, the Min Max Player, and the non-deterministic Min Max player. While we had some success, overall results were underwhe... | github_jupyter | move = np.argmax(probs)
if self.training is True and np.random.rand(1) < self.random_move_prob:
move = board.random_empty_spot()
else:
move = np.argmax(probs)
self.random_move_prob *= self.random_move_decrease
%matplotlib inline
import tensorflow as tf
import matplotlib.pyplot as plt
from util import evalua... | 0.387574 | 0.980581 |
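The move-selection snippet in the row above implements a decaying epsilon-greedy policy. Stripped of the class plumbing, the core idea looks like this (illustrative sketch, not the notebook's player code):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    # With probability epsilon take a random exploratory action;
    # otherwise exploit the current best estimate.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```

Multiplying `epsilon` by a decay factor after each training move, as the snippet does with `random_move_prob`, gradually shifts the player from exploration toward exploitation.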
<a href="https://colab.research.google.com/github/Nirzu97/pyprobml/blob/matrix-factorization/notebooks/matrix_factorization_recommender.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Matrix Factorization for Movie Lens Recommendations
This note... | github_jupyter | import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
!wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
!ls
!unzip ml-100k
folder = "ml-100k"
!wget http://files.grouplens.org/datasets/movielens/ml-1m.zip
!unzip ml-1m
!ls
folder = "ml-1m"
ratings_list = [
[int(x) for x in i.s... | 0.451568 | 0.913754 |
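As a toy illustration of the matrix-factorization idea behind these recommendations — plain SGD on a hand-made ratings matrix, not the notebook's MovieLens pipeline:

```python
import numpy as np

def factorize(R, k=2, steps=500, lr=0.01, reg=0.02, seed=0):
    """SGD matrix factorization fit only on the observed (nonzero) entries of R."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    P = rng.normal(scale=0.1, size=(n, k))   # user factors
    Q = rng.normal(scale=0.1, size=(m, k))   # item factors
    rows, cols = np.nonzero(R)
    for _ in range(steps):
        for i, j in zip(rows, cols):
            e = R[i, j] - P[i] @ Q[j]        # error on one observed rating
            P[i] += lr * (e * Q[j] - reg * P[i])
            Q[j] += lr * (e * P[i] - reg * Q[j])
    return P, Q
```

After training, `P @ Q.T` fills in predicted ratings for the unobserved (zero) entries, which is exactly what a recommender uses for ranking.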
+ This notebook is part of lecture 31 *Change of basis and image compression* in the OCW MIT course 18.06 by Prof Gilbert Strang [1]
+ Created by me, Dr Juan H Klopper
+ Head of Acute Care Surgery
+ Groote Schuur Hospital
+ University Cape Town
+ <a href="mailto:juan.klopper@uct.ac.za">Email me with you... | github_jupyter | from IPython.core.display import HTML, Image
css_file = 'style.css'
HTML(open(css_file, 'r').read())
from sympy import init_printing, Matrix, symbols, sqrt, Rational
from warnings import filterwarnings
init_printing(use_latex = 'mathjax')
filterwarnings('ignore')
# Just look at what 512 square is
512 ** 2
A = Matrix(... | 0.391988 | 0.94366 |
# Attention-Based Neural Machine Translation
This notebook trains a sequence-to-sequence (seq2seq) model that translates Tatar to English. This is an advanced example that assumes some familiarity with sequence-to-sequence models.
After training the model in this notebook, you will be able to input a Tatar sentence, such as *"Әйдәгез!"*, and get back its English translation, *"Let's go!"*
For a simple example the translation quality is satisfactory, but the generated attention plot may be even more interesting: it shows which parts of the input sentence received the model's attention during translation.
<img src="https://tensorflow.google.cn/images/spanish-english.png" alt="spanish-... | github_jupyter | import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
May I borrow this book? ¿Puedo tomar prestado este libro?
'''
# download the file
path_to_zip = tf.keras.... | 0.469763 | 0.928409 |
# **What Is a Simulation?**
> An introduction to the subject and a description of the computational tools that will be used throughout the course.
___
### Simulation
- It is a technique, or a set of techniques, that helps us understand the behavior of a real or hypothetical _system_.
<img style="center" src="https:/... | github_jupyter | from IPython.display import YouTubeVideo
YouTubeVideo('LDZX4ooRsWs')
%run welcome.py | 0.301671 | 0.955858 |
# pyPCGA stwave inversion example
```
%matplotlib inline
```
- Import the relevant Python packages after installing pyPCGA
- `stwave.py` includes a Python wrapper for the STWAVE model
```
import matplotlib.pyplot as plt
from scipy.io import savemat, loadmat
import numpy as np
import stwave as st
from pyPCGA import PCGA
import m…
```

kind: github_jupyter

```
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.io import savemat, loadmat
import numpy as np
import stwave as st
from pyPCGA import PCGA
import math
import datetime as dt
N = np.array([110,83])
m = np.prod(N)
dx = np.array([5.,5.])
xmin = np.array([0. + dx[0]/2., 0. + dx[1]/2.])
xmax = np.array([110.…
```

quality_prob: 0.417865 · learning_prob: 0.868437
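The arrays above define a regular inversion grid: `N` cells per direction, cell size `dx`, and `xmin`/`xmax` placed at cell centers (hence the `dx/2` offsets). A small NumPy sketch of how such cell-center coordinates can be generated — an illustration of the setup, not code from pyPCGA itself:

```python
import numpy as np

N = np.array([110, 83])        # number of cells in x and y
dx = np.array([5., 5.])        # cell size in meters
xmin = np.array([0. + dx[0] / 2., 0. + dx[1] / 2.])

# Cell-center coordinates along each axis.
x = xmin[0] + dx[0] * np.arange(N[0])
y = xmin[1] + dx[1] * np.arange(N[1])

# Full grid of (x, y) cell centers, one row per unknown.
X, Y = np.meshgrid(x, y, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])

print(pts.shape)  # (9130, 2) -- matches m = np.prod(N)
```

Each row of `pts` corresponds to one unknown in the inverse problem, which is why `m = np.prod(N)` gives the problem size.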
### **Heavy Machinery Image Recognition**
We are going to build a machine learning model that can recognize heavy machinery images, classifying whether each image shows a truck or an excavator.
```
from IPython.display import display
import os
import requests
from PIL import Image
from io import BytesIO
import numpy as np
import pandas as pd
…
```

kind: github_jupyter

```
from IPython.display import display
import os
import requests
from PIL import Image
from io import BytesIO
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layer…
```

quality_prob: 0.652795 · learning_prob: 0.90261
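The imports above suggest the usual pipeline: fetch an image with `requests`, decode it with `PIL`, convert it to an array with `img_to_array`, then scale pixel values before feeding a `Conv2D` network. A dependency-light sketch of the array side of that pipeline, using a fake 8-bit RGB image in place of a real download:

```python
import numpy as np

# Stand-in for img_to_array(load_img(...)): an 8-bit RGB image, H x W x 3.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Scale to [0, 1] floats, as is typical before a CNN.
x = img.astype("float32") / 255.0

# Add a batch dimension: Keras models expect (batch, H, W, channels).
batch = np.expand_dims(x, axis=0)

print(batch.shape)  # (1, 64, 64, 3)
```

In the notebook itself the array would come from `img_to_array(load_img(...))` on an image downloaded into a `BytesIO` buffer.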
# Grouping for Aggregation, Filtration, and Transformation
```
import pandas as pd
import numpy as np
pd.set_option('max_columns', 4, 'max_rows', 10, 'max_colwidth', 12)
```
## Introduction
### Defining an Aggregation
### How to do it...
```
flights = pd.read_csv('data/flights.csv')
flights.head()
(flights
    .g…
```

kind: github_jupyter

```
import pandas as pd
import numpy as np
pd.set_option('max_columns', 4, 'max_rows', 10, 'max_colwidth', 12)
flights = pd.read_csv('data/flights.csv')
flights.head()
(flights
    .groupby('AIRLINE')
    .agg({'ARR_DELAY':'mean'})
)
(flights
    .groupby('AIRLINE')
    ['ARR_DELAY']
    .agg('mean')
)
(flights
    .…
```

quality_prob: 0.367724 · learning_prob: 0.743354
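The two truncated chains above compute the same quantity — mean arrival delay per airline — first via `.agg` with a dict (returning a DataFrame) and then by selecting the column before aggregating (returning a Series). Since `data/flights.csv` is not available here, a tiny stand-in DataFrame shows the pattern:

```python
import pandas as pd

flights = pd.DataFrame({
    "AIRLINE":   ["AA", "AA", "UA", "UA", "UA"],
    "ARR_DELAY": [10.0, 20.0, -5.0, 5.0, 0.0],
})

# Dict form: returns a DataFrame with one aggregated column.
by_dict = flights.groupby("AIRLINE").agg({"ARR_DELAY": "mean"})

# Column-selection form: returns a Series indexed by airline.
by_series = flights.groupby("AIRLINE")["ARR_DELAY"].agg("mean")

print(by_series.to_dict())  # {'AA': 15.0, 'UA': 0.0}
```

Selecting the column first is usually faster on wide frames, because only the needed column is carried through the groupby.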
# Searchlight Analysis
* A classification problem
* Two conditions: positive / negative
### Data description
* Three subjects
* sub-01, sub-02, sub-03
* Images
* aligned in MNI space
* beta-values
* Run in ROI mask
* Left precentral gyrus in AAL atlas
```
# initialize data
data_dir = '/home/ubuntu/data/'
r…
```

kind: github_jupyter

```
# initialize data
data_dir = '/home/ubuntu/data/'
result_dir = '/home/ubuntu/results/'
subj_list = ['sub-01', 'sub-02', 'sub-03']
num_subj = len(subj_list)
# initialize headers
import nilearn.decoding
import nilearn.image
import pandas as pd
import time
from sklearn.model_selection import KFold
!ls $data_dir
labels …
```

quality_prob: 0.362969 · learning_prob: 0.782039
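The `KFold` import above is the cross-validation half of the searchlight: within each sphere, the classifier is scored on held-out folds. A pure-Python sketch of how k-fold index splitting works — my own minimal version for illustration, not sklearn's implementation, which also supports shuffling and stratification:

```python
def kfold_indices(n_samples, n_splits):
    """Yield (train_idx, test_idx) pairs, mimicking sklearn's KFold:
    the first n_samples % n_splits folds get one extra sample."""
    indices = list(range(n_samples))
    base, extra = divmod(n_samples, n_splits)
    start = 0
    for fold in range(n_splits):
        size = base + (1 if fold < extra else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        start += size
        yield train, test

folds = list(kfold_indices(10, 3))
print([test for _, test in folds])  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Every sample appears in exactly one test fold, so averaging accuracy over folds gives an unbiased estimate for each searchlight sphere.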
## Stable Model Training
#### NOTES:
* This is "NoGAN" based training, described in the DeOldify readme.
* This model prioritizes stable and reliable renderings. It does particularly well on portraits and landscapes. It's not as colorful as the artistic model.
```
import os
os.environ['CUDA_VISIBLE_DEVICES']='0'
…
```

kind: github_jupyter

```
import os
os.environ['CUDA_VISIBLE_DEVICES']='0'
import fastai
from fastai import *
from fastai.vision import *
from fastai.callbacks.tensorboard import *
from fastai.vision.gan import *
from fasterai.generators import *
from fasterai.critics import *
from fasterai.dataset import *
from fasterai.loss import *
from fast…
```

quality_prob: 0.483648 · learning_prob: 0.693116