# Land Registration in Scotland workshop
This is a workshop about land registration, specifically land registration in Scotland, a country with a long history of it.
## The General Register of Sasines - 1617
[The General Register of Sasines](ht...

```
[~]$ conda install numpy pandas scikit-learn matplotlib seaborn jupyter
```
# Ensemble methods. Exercises
In this section we have only two exercises:
1. Find the best three classifiers in the stacking method, using the classifiers from the scikit-learn package.
2. Build the arcing (arc-x4) method.
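Before the code, it helps to recall what arc-x4 does: after each boosting round, example `i` is resampled with probability proportional to `1 + m_i**4`, where `m_i` counts how often example `i` has been misclassified so far. A minimal NumPy sketch of just that weight update (the miscount values below are made up for illustration):

```python
import numpy as np

def arc_x4_weights(miscounts):
    # Arc-x4 sampling weights: w_i proportional to 1 + m_i**4,
    # where m_i is the number of times example i was misclassified so far.
    w = 1.0 + miscounts.astype(float) ** 4
    return w / w.sum()

# Tiny demonstration with made-up misclassification counts.
miscounts = np.array([0, 1, 2, 0])
w = arc_x4_weights(miscounts)

# Resample a training set according to the weights (as each arcing round would).
rng = np.random.default_rng(0)
idx = rng.choice(len(w), size=len(w), p=w)
```

Examples that are misclassified more often receive sharply larger weights (the fourth power), which is what concentrates later rounds on the hard cases.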
```
%store -r data_set
%store -r labels
%store -r test_data_set
%store -r test_labels
%store -r unique_labels
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree imp...
```
```
##%overwritefile
##%file:src/cargocommand.py
##%file:../../jupyter-MyRust-kernel/jupyter_MyRust_kernel/plugins/cargocommand.py
##%noruncode
from typing import Dict, Tuple, Sequence, List
from plugins.ISpecialID import IStag, IDtag, IBtag, ITag
import os
import re
class MyCargocmd(IStag):
    kobj = None
    def getName(self)...
```
<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>
```
! git clone https://github.com/data-psl/lectures2021
import sys
sys.path.append('lectures2021/notebooks/02_sklearn')
%cd 'lectures2021/notebooks/02_sklearn'
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
np.random.seed(2)
x = np.concatenate([np....
```
# Roundtrip example
This notebook shows how to load a string-literal tree into Roundtrip, interact with the tree, and retrieve query information based on the selection made in the interactive tree.
```
import hatchet as ht
if __name__ == "__main__":
    smallStr = [
        {
            "name": "foo",
            "metrics": {"time (inc)": 130.0, "time": 0.0},
            "children": [
                {
                    "name": "bar",
                    "metrics": {"time (inc)": 20.0, "time": 5.0},
                    "children": [
                        ...
```
While taking the **Intro to Deep Learning with PyTorch** course by Udacity, I really liked the exercise based on building a character-level language model using LSTMs. I was unable to complete it all on my own, since NLP is still a very new field to me. I decided to give the exercise a try with `tensorflow 2.0` and b...

```
!pip install tensorflow-gpu==2.0.0-beta1
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
print(tf.__version__)
# Open text file and read in data as `text`
with open('anna.txt', 'r') as f:
    text = f.read()
# Fi...
```
# Using Strings in Python 3
[Python String docs](https://docs.python.org/3/library/string.html)
### Creating Strings
Enclose a string in single or double quotes, or in triple single quotes.
And you can embed single quotes within double quotes, or double quotes within single quotes.
```
s = 'Tony Stark is'
t = "Ironman."
print(s, t)
u = 'Her book is called "The Magician".'
print(u)
v = '''Captain Rogers kicks butt.'''
print(v)
print(type(s))
print(len(s))
print(s.split())
print(len(s.split()))
print(u.split('a'))
print('you,are,so,pretty'.split(','))
print(' '.join(['Just', 'do', 'it.']))
print('dog...
```
## Example. Estimating the speed of light
Simon Newcomb's measurements of the speed of light, from
> Stigler, S. M. (1977). Do robust estimators work with real data? (with discussion). *Annals of
Statistics* **5**, 1055–1098.
The data are recorded as deviations from $24\ 800$
nanoseconds. Table 3.1 of Bayesian Dat...

```
%matplotlib inline
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
import seaborn as sns
from scipy.optimize import brentq
plt.style.use('seaborn-darkgrid')
plt.rc('font', size=12)
%config InlineBackend.figure_formats = ['retina']
numbs = "28 26 33 24 34 -44 27 16 40 -2 29 22 \
24 21...
```
# Exceptions and error handling
## Exceptions and errors
There are two types of errors in Python: syntax errors and exceptions.
Syntax errors occur when we write something that the Python
interpreter is not able to understand; for example, creating
a variable with an invalid name is a syntax err...

```
7a = 7.0
a, b = 7, 0
c = a / b
try:
    a, b = 7, 0
    c = a / b
except ZeroDivisionError:
    print("No puedo dividir por cero")
try:
    ...
except (RuntimeError, TypeError, NameError):
    pass
def divide(x, y):
    try:
        result = x / y
        print("el resultado es", result)
    except ZeroDivisionErro...
```
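The `divide` example above is cut off; the full pattern also supports `else` (runs only when no exception was raised) and `finally` (always runs). A minimal sketch — the messages and return values here are choices made for illustration, not the original author's:

```python
def divide(x, y):
    try:
        result = x / y
    except ZeroDivisionError:
        # handle only the error we expect
        print("cannot divide by zero")
        return None
    else:
        # runs only if the try block raised nothing
        return result
    finally:
        # runs in every case, exception or not
        print("division attempted")

ok = divide(7, 2)    # prints "division attempted", returns 3.5
bad = divide(7, 0)   # prints both messages, returns None
```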
# Lecture 10: Variable Scope
CSCI 1360: Foundations for Informatics and Analytics
## Overview and Objectives
We've spoken a lot about data structures and orders of execution (loops, functions, and so on). But now that we're intimately familiar with different ways of blocking our code, we haven't yet touched on how t...

```
def func(x):
    print(x)
x = 10
func(20)
print(x)
import numpy as np
try:
    i = np.random.randint(100)
    if i % 2 == 0:
        raise
except:
    copy = i
print(i)     # Does this work?
print(copy)  # What about this?
# This is a global variable. It can be accessed anywhere in this notebook.
a = 0
#...
```
```
import numpy as np
import pandas as pd
import seaborn as sns
sns.reset_defaults()
sns.set_style(style='darkgrid')
sns.set_context(context='notebook')
import matplotlib.pyplot as plt
#plt.style.use('ggplot')
plt.rcParams["patch.force_edgecolor"] = True
plt.rcParams["figure.figsize"] = (20.0, 10.0)
pd.set_option('display....
```
<center>
<h1>Accessing THREDDS using Siphon</h1>
<br>
<h3>25 July 2017
<br>
<br>
Ryan May (@dopplershift)
<br><br>
UCAR/Unidata<br>
</h3>
</center>
# What is Siphon?
* Python library for remote data access
* Focus on atmospheric and oceanic data sources
* Bulk of features focused on THREDDS
## Installing on Azure
...

```
!conda install --name root siphon -y -c conda-forge
from siphon.catalog import TDSCatalog
top_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog.xml')
for ref in top_cat.catalog_refs:
    print(ref)
ref = top_cat.catalog_refs['Forecast Model Data']
ref.href
ref = top_cat.catalog_refs[0]
ref.href
new_cat = r...
```
# Optimizing a function with probability simplex constraints
This notebook arose in response to a question on StackOverflow about how to optimize a function with probability simplex constraints in Python (see http://stackoverflow.com/questions/32252853/optimization-with-python-scipy-optimize). This is a topic I've tho...

```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import minimize
def objective_function(x, y, gamma=0.2):
    return -((x/y)**gamma).sum()**(1.0/gamma)
cons = ({'type': 'eq', 'fun': lambda x: np.array([sum(x) - 1])})
y = np.array([0.5, 0.3, 0.2])
initial_x = np.array([0.2, 0....
```
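The `minimize` call is truncated above. A common building block for simplex-constrained problems — and an alternative to equality constraints — is Euclidean projection onto the probability simplex, which enables projected-gradient methods. A hedged NumPy sketch (the sort-based algorithm often attributed to Duchi et al.; the test vectors are mine):

```python
import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto {x : x >= 0, sum(x) = 1}
    u = np.sort(v)[::-1]                 # sort descending
    css = np.cumsum(u)
    # largest index rho with u[rho]*(rho+1) > css[rho] - 1
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# A point already on the simplex projects to itself...
p1 = project_to_simplex(np.array([0.3, 0.3, 0.4]))
# ...and an infeasible point lands on the simplex boundary.
p2 = project_to_simplex(np.array([2.0, 0.0, 0.0]))
```

With this helper, each projected-gradient step is just `x = project_to_simplex(x - step * grad(x))`.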
```
import os
import shutil
import zipfile
import urllib.request
def download_repo(url, save_to):
    zip_filename = save_to + '.zip'
    urllib.request.urlretrieve(url, zip_filename)
    if os.path.exists(save_to):
        shutil.rmtree(save_to)
    with zipfile.ZipFile(zip_filename, 'r') as zip_ref:
        zip_re...
```
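The tail of `download_repo` is truncated above; presumably it extracts the archive and cleans up. A hedged completion of just the extraction step, exercised locally with a zip built on the fly so no network access is needed (the helper name and cleanup behavior are my assumptions):

```python
import os
import shutil
import tempfile
import zipfile

def extract_zip(zip_filename, save_to):
    # Replace `save_to` with the contents of `zip_filename`,
    # then delete the archive (mirrors the truncated tail above).
    if os.path.exists(save_to):
        shutil.rmtree(save_to)
    with zipfile.ZipFile(zip_filename, 'r') as zip_ref:
        zip_ref.extractall(save_to)
    os.remove(zip_filename)

# Local round-trip check: build a zip, extract it, inspect the result.
tmp = tempfile.mkdtemp()
zpath = os.path.join(tmp, 'repo.zip')
with zipfile.ZipFile(zpath, 'w') as z:
    z.writestr('README.md', 'hello')
dest = os.path.join(tmp, 'repo')
extract_zip(zpath, dest)
```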
# Using GalFlow to perform FFT-based convolutions
```
import tensorflow as tf
import galflow as gf
import galsim
%pylab inline
# First let's draw a galaxy image with GalSim
data_dir='/usr/local/share/galsim/COSMOS_25.2_training_sample'
cat = galsim.COSMOSCatalog(dir=data_dir)
psf = cat.makeGalaxy(2, gal_type='real', noise_pad_size=0).original_psf
gal = cat.makeGalaxy(2,...
```
## 1.0 Import Function
```
from META_TOOLBOX import *
import VIGA_VERIFICA as VIGA_VER
```
## 2.0 Setup
```
SETUP = {'N_REP': 30,
'N_ITER': 100,
'N_POP': 1,
'D': 4,
'X_L': [0.25, 0.05, 0.05, 1/6.0],
'X_U': [0.65, 0.15, 0.15, 1/3.5],
'SIGMA': 0.15,
         'ALPHA': 0.98,
         'TEMP': None,
         'STOP_CON...
```
```
%matplotlib inline
```
# Net file
This is the Net file for the clique problem: it defines the state and output transition functions.
```
import tensorflow as tf
import numpy as np
def weight_variable(shape, nm):
'''function to initialize weights'''
initial = tf.truncated_normal(shape, stddev=0.1)
    tf.summary.histogram(nm, initial, collections=['always'])
    return tf.Variable(initial, name=nm)
class Net:
    '''class ...
```
# YOLO v3 Finetuning on AWS
This series of notebooks demonstrates how to finetune pretrained YOLO v3 (aka YOLO3) using MXNet on AWS.
**This notebook** guides you through deploying the YOLO3 model trained in the previous module to a SageMaker endpoint using a GPU instance.
**Follow-on** the content of the notebooks sh...

```
%load_ext autoreload
%autoreload 1
# Built-Ins:
import os
import json
from datetime import datetime
from glob import glob
from pprint import pprint
from matplotlib import pyplot as plt
from base64 import b64encode, b64decode
# External Dependencies:
import mxnet as mx
import boto3
import imageio
import sagemaker
impo...
```
<a href="https://colab.research.google.com/github/seyrankhademi/introduction2AI/blob/main/linear_vs_mlp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Computer Programming vs Machine Learning
This notebook is written by Dr. Seyran Khademi to f...

```
# The weighted-sum function takes as input the feature values for the applicant
# and outputs the final score.
import numpy as np
def weighted_sum(GPA, QP, Age, Loan):
    # check that the points for GPA and QP are in range between 0 and 10
    x = GPA
    y = QP
    points = np.array([x, y])
    if (points < 0).all() and (points > ...
```
# Hello!
In this repository we have several **datasets** for practicing:
1. Data cleaning
2. Data processing
3. Data visualization
You can work in this __Notebook__ if you like, but we recommend using one of our templates.
In the menu on the left, click the + sign, ...
```
import numpy as np
import xarray as xr
import hvplot.xarray
import glob
```
# only case1
```
fs = glob.glob('out0*.nc')
fs.sort()
for nn, f in enumerate(fs):
ds = xr.open_dataset(f)
U = ds['u'].values
V = ds['v'].values
Vmag = np.sqrt( U**2 + V**2) + 0.00000001
    angle = (np.pi/2.) - np.arctan2(U/Vmag, V/Vmag)
    d...
```
# Using `pymf6` Interactively
You can run a MODFLOW6 model interactively.
For example, in a Jupyter Notebook.
## Setup
First, change into the directory of your MODFLOW6 model:
```
%cd ../examples/ex02-tidal/
```
The directory `ex02-tidal` contains all files needed to run a MODFLOW6 model.
The model `ex02-tidal` is...

```
from pymf6.threaded import MF6
mf6 = MF6()
mf6.simulation.model_names
mf6.simulation.time_unit
mf6.simulation.time_multiplier
mf6.simulation.TDIS.NPER
mf6.simulation.TDIS.TOTALSIMTIME
mf6.simulation.TDIS.var_names
sim1 = mf6.simulation.solution_groups[0]
sim1
sim1.package_names
sim...
```
```
def binary_search(nums, target):
p, r = 0, len(nums) - 1
while p <= r:
m = (p + r) // 2
if nums[m] == target:
return True
elif nums[m] > nums[0]:
p = m + 1
else:
r = m - 1
return False
```
# Search in Rotated Sorted Array
As a pivot, A...

```
p, r = 0, len(A) - 1
while p + 1 < r and A[p] > A[r]:
    ...
```
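Note that the `binary_search` sketch earlier compares `nums[m]` against `nums[0]` but never narrows on `target` within the sorted half, so it can miss elements. A corrected, hedged sketch of search in a rotated sorted array (distinct elements assumed; this is the standard two-sorted-halves argument, not necessarily the author's intended solution):

```python
def search_rotated(nums, target):
    p, r = 0, len(nums) - 1
    while p <= r:
        m = (p + r) // 2
        if nums[m] == target:
            return True
        if nums[p] <= nums[m]:              # left half [p, m] is sorted
            if nums[p] <= target < nums[m]:
                r = m - 1                    # target must be in the sorted half
            else:
                p = m + 1
        else:                                # right half [m, r] is sorted
            if nums[m] < target <= nums[r]:
                p = m + 1
            else:
                r = m - 1
    return False
```

At every step one half is guaranteed sorted, so a single range check decides which half can be discarded, keeping the search O(log n).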
# Data analysis of Zenodo zip content
This [Jupyter Notebook](https://jupyter.org/) explores the data retrieved by [data-gathering](../data-gathering) workflows.
It assumes the `../../data` directory has been populated by the [Snakemake](https://snakemake.readthedocs.io/en/stable/) workflow [zenodo-random-samples-zip...

```
!pwd
!ls data
!sha512sum data/seed
import requests
rec = requests.get("https://zenodo.org/api/records/14614").json()
rec
rec["files"][0]["type"]  # File extension
rec["files"][0]["links"]["self"]  # Download link
rec["metadata"]["access_right"]  # "open" means we are allowed to download the above
rec["links"]["doi"]  # ...
```
## Data Split
```
import numpy as np
from matplotlib import pyplot as plt
import math
```
### Read the data
```
import pandas as pd
df_data = pd.read_csv('../data/2d_classification.csv')
data = df_data[['x','y']].values
label = df_data['label'].values
```
## Dividing the data into Train and Test data
- Using the ...

```
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_t...
```
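The `train_test_split` call above is truncated. Its essential behavior — shuffle the indices, then slice — can be sketched in plain NumPy; the 80/20 ratio and seed below are my assumptions for illustration, not the notebook's settings:

```python
import numpy as np

def simple_train_test_split(data, label, test_size=0.2, seed=0):
    # A stand-in for sklearn's train_test_split: permute indices, then slice.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_test = int(round(len(data) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return data[train_idx], data[test_idx], label[train_idx], label[test_idx]

X = np.arange(20).reshape(10, 2)   # 10 samples, 2 features
yv = np.arange(10)
X_tr, X_te, y_tr, y_te = simple_train_test_split(X, yv)
```

Shuffling before slicing is what prevents any ordering in the file (e.g. sorted labels) from leaking into the split.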
<img src="https://maltem.com/wp-content/uploads/2020/04/LOGO_MALTEM.png" style="float: left; margin: 20px; height: 55px">
<br>
<br>
<br>
<br>
# Random Forests and ExtraTrees
_Authors: Matt Brems (DC), Riley Dallas (AUS)_
---
## Random Forests
---
With bagged decision trees, we generate many different trees on ...

```
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, train_test_split, cross_val_score
train = pd.read_csv('datasets/train.csv')
train.shape
test = pd.read_csv('datasets/test.csv')
train = train[train['Embark...
```
# "Will the client subscribe?"
> "An Example of Applied Machine Learning"
- toc: true
- branch: master
- badges: true
- comments: true
- categories: [machine_learning, jupyter, ai]
- image: images/confusion.png
- hide: false
- search_exclude: true
- metadata_key1: metadata_value1
- metadata_key2: metadata_value2
# In...

```
import pandas as pd
banks_data = pd.read_csv('bank-full.csv', delimiter=';')  # By default, the delimiter is ',' but this csv file uses ';' instead.
banks_data
banks_data.describe()
banks_data.drop(['duration'], inplace=True, axis=1)
banks_data.drop(['contact'], inplace=True, axis=1)
banks_data.loc[(banks_data['pdays'...
```
# The Fourier Transform
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Properti...

```
%matplotlib inline
import sympy as sym
sym.init_printing()
def fourier_transform(x):
    return sym.transforms._fourier_transform(x, t, w, 1, -1, 'Fourier')
def inverse_fourier_transform(X):
    return sym.transforms._fourier_transform(X, w, t, 1/(2*sym.pi), 1, 'Inverse Fourier')
t, w = sym.symbols('t omega')
X = sy...
```
```
# Useful for debugging
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
from matplotlib import pyplot as plt
```
# Map1D_TM
---
```
from gpt.maps import Map1D_TM
cav = Map1D_TM('Buncher', 'fields/buncher_CTB_1D.gdf', frequency=1.3e9, scale=10e6, relative_phase=0)
?cav
...
```
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
```
### Tensile Strength Example
#### Manual Solution (See code below for faster solution)
df(SSTR/SSB) = 4 - 1 = 3 (four different concentrations/samples)
df(SSE/SSW) = 4(6 - 1) ...

```
alpha = 0.01
five_percent = [7,8,15,11,9,10]
ten_percent = [12,17,13,18,19,15]
fifteen_percent = [14,18,19,17,16,18]
twenty_percent = [19,25,22,23,18,20]
fig, ax = plt.subplots(fi...
```
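The manual solution above breaks off after the degrees of freedom. Completing the arithmetic as a NumPy sketch over the same four concentration groups (the variable names are mine; this mirrors the standard one-way ANOVA decomposition, not necessarily the notebook's exact steps):

```python
import numpy as np

groups = [
    [7, 8, 15, 11, 9, 10],     # 5%
    [12, 17, 13, 18, 19, 15],  # 10%
    [14, 18, 19, 17, 16, 18],  # 15%
    [19, 25, 22, 23, 18, 20],  # 20%
]
k = len(groups)                       # number of treatments
n = sum(len(g) for g in groups)       # total observations
grand_mean = np.mean([x for g in groups for x in g])

# Between-group (treatment) and within-group (error) sums of squares.
ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)

df_between = k - 1     # 3
df_within = n - k      # 20
F = (ss_between / df_between) / (ss_within / df_within)
```

Comparing `F` against the critical value of the F(3, 20) distribution at the chosen `alpha` completes the test.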
# Pyspark & Astrophysical data: IMAGE
Let's play with images. In this example, we load image data from a FITS file (CFHTLens) and identify sources with a simple astropy algorithm. The workflow is described below. For simplicity, we only focus on one CCD in this notebook. For full scale, see the pyspark [im2cat.py](...

```
## Import SparkSession from Spark
from pyspark.sql import SparkSession
## Create a DataFrame from the HDU data of a FITS file
fn = "../../src/test/resources/image.fits"
hdu = 1
df = spark.read.format("fits").option("hdu", hdu).load(fn)
## By default, spark-fits distributes the rows of the image
df.printSchema()
df.show...
```
# Data Manipulation
It is impossible to get anything done if we cannot manipulate data. Generally, there are two important things we need to do with data: (i) acquire it and (ii) process it once it is inside the computer. There is no point in acquiring data if we do not even know how to store it, so let's get our hand...

```
import torch
x = torch.arange(12, dtype=torch.float64)
x
# We can get the tensor shape through the shape attribute.
x.shape
# .shape is an alias for .size(), and was added to more closely match numpy
x.size()
x = x.reshape((3, 4))
x
torch.FloatTensor(2, 3)
torch.Tensor(2, 3)
torch.empty(2, 3)
torch.zeros((2, 3, 4))...
```
# Object-oriented programming
This notebook contains assignments that are more complex. They are aimed at students who already know about [object-oriented programming](https://en.wikipedia.org/wiki/Object-oriented_programming) from prior experience and who are familiar with the concepts, but not with how OOP is done in Py...

```
class Person:
    def __init__(self, name):
        pass
    def say_hi(self):
        pass
persons = []
joe = Person("Joe")
jane = Person("Jane")
persons.append(joe)
persons.append(jane)
# the reference to Person on the line below means that Employee inherits
# from Person
class Employee(Person):
    def __init...
```
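The skeleton above leaves the method bodies as `pass` and cuts off mid-`Employee`. One possible completion as a sketch — the attribute names, greeting format, and staff-number idea are my assumptions for illustration:

```python
class Person:
    def __init__(self, name):
        self.name = name

    def say_hi(self):
        return "Hi, I am " + self.name

class Employee(Person):
    # Employee inherits from Person and extends it with a staff number.
    def __init__(self, name, staff_number):
        super().__init__(name)          # reuse the parent's initializer
        self.staff_number = staff_number

    def say_hi(self):
        # override the parent method while reusing its result
        return super().say_hi() + " (#%s)" % self.staff_number

persons = [Person("Joe"), Employee("Jane", 1001)]
greetings = [p.say_hi() for p in persons]
```

Because `Employee` is a `Person`, both objects can sit in the same list and respond to the same `say_hi` call — that substitutability is the point of the inheritance exercise.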
# Introduction
Motto: "garbage in, garbage out". Feeding dirty data into a model will give results that are meaningless. Steps for improving data quality:
1. Getting the data - this is rather easy since the texts are pre-uploaded.
2. Cleaning the data - use popular text pre-processing techniques.
3. Organizing the ...

```
import pandas as pd
pd.set_option('max_colwidth', 150)
corpus = pd.DataFrame(columns=['text', 'author'])
corpora_size = 0
authors = {
    'Ivan Vazov': ['/content/drive/MyDrive/Colab Notebooks/project/data/vazov_separated/Ivan_Vazov_-_Pod_igoto_-_1773-b.txt',
                   '/content/drive/MyDrive/Colab Notebooks/...
```
```
import numpy as np
import pandas as pd
import glob
from astropy.table import Table
import matplotlib.pyplot as plt
import json
import collections
import astropy
spectra_contsep_j193747_1 = Table.read("mansiclass/spec_auto_contsep_lstep1__crr_b_ifu20211023_02_16_15_RCB-J193747.txt", format = "ascii")
spectra_robot_j193...
```
## GANs
Credits: \
https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html \
https://jovian.ai/aakashns/06-mnist-gan
```
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as tran...
```
<table> <tr>
    <td style="background-color:#ffffff;">
        <a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="25%" align="left"> </a></td>
    <td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
        prepared by <a href="http://abu.lu....
```
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('/content/drive/My Drive/Colab Notebooks/Udacity/deep-learning-v2-pytorch/convolutional-neural-networks/conv-visualization')
```
# Maxpooling Layer
In this notebook, we add and visualize the output of a maxpooling layer in a CNN.
A ...

```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here b...
```

# Introduction to boundary conditions in terrainbento.
## Overview
This tutorial shows example usage of the terrainbento boundary handlers. For comprehensive information about all options and defaults, refer to the [documentation](http://terrainbento.readthedocs...

```
import numpy as np
np.random.seed(42)
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import holoviews as hv
hv.notebook_extension('matplotlib')
from terrainbento import Basic
basic_params = {
    # create the Clock.
    "clock": {
        "sta...
```
To start this Jupyter Dash app, please run all the cells below. Then, click on the **temporary** URL at the end of the last cell to open the app.
```
!pip install -q jupyter-dash==0.3.0rc1 dash-bootstrap-components transformers
import time
import dash
import dash_html_components as html
import dash_core_components as dcc
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
from jupyter_dash import JupyterDash
from transformers ...
```
# Identifying Bees Using Crowd Sourced Data using Amazon SageMaker
### Table of contents
1. [Introduction to dataset](#introduction)
2. [Labeling with Amazon SageMaker Ground Truth](#groundtruth)
3. [Reviewing labeling results](#review)
4. [Training an Object Detection model](#training)
5. [Review of Training Results...

```
!wget http://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DIG-TF-200-MLBEES-10-EN/dataset.zip
!unzip -qo dataset.zip
!unzip -l dataset.zip | tail -20
# S3 bucket must be created in us-west-2 (Oregon) region
BUCKET = 'denisb-sagemaker-oregon'
PREFIX = 'input'  # this is the root path to your working space, feel to u...
```

Hi everyone, I'm CDA Cao Xin.
My GitHub: https://github.com/imcda .
My email: caoxin@cda.cn .
In this lesson, let's talk about Pandas.
# Introduction to Pandas
To use pandas, you first need to know its two main data structures: Series and DataFrame.
# Series
import pandas as pd
print(pd.__version_... | github_jupyter | import pandas as pd
print(pd.__version__)
import pandas as pd
import numpy as np
s = pd.Series([1,3,6,np.nan,44,1])
print(s)
dates = pd.date_range('20160101',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=['a','b','c','d'])
print(df)
print(df['b'])
df1 = pd.DataFrame(np.arange(12).reshape((... | 0.138607 | 0.917525 |
```
from google.colab import drive
drive.mount('/content/drive')
```
Importing all the dependencies
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2, ResNet50, VGG19
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras....
```
# GCM Filters Tutorial
## Synthetic Data
In this example, we are going to work with "synthetic data"; data we made up for the sake of keeping the example simple and self-contained.
### Create Input Data
Gcm-filters uses Xarray DataArrays for its inputs and outputs. So we will first import xarray (and numpy).
```
import gcm_filters
import numpy as np
import xarray as xr
nt, ny, nx = (10, 128, 256)
data = np.random.rand(nt, ny, nx)
da = xr.DataArray(data, dims=['time', 'y', 'x'])
da
mask_data = np.ones((ny, nx))
mask_data[(ny // 4):(3 * ny // 4), (nx // 4):(3 * nx // 4)] = 0
wet_mask = xr.DataArray(mask_data, dims=['y', 'x'])
...
```
# Compute viable habitat in geographic space
Viable habitat is computed as the convolution of trait space with environmental conditions.
```
%load_ext autoreload
%autoreload 2
import json
import os
import shutil
from itertools import product
import data_collections as dc
import funnel
import intake
import matplotlib.pyplot as plt
import metabolic as mi
import numpy as np
import operators as ops
import util
import xarray as xr
import yaml
curator = util.cur...
```
```
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers, optimizers, models
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from pandas.plotting import register_matplotlib_converters
%matplotlib inline
register_matplotlib_...
```
# EnergyPlus Output Data Analysis Example
Created by Clayton Miller (miller.clayton@arch.ethz.ch)
The goal of this notebook is to give a user a glimpse at the loading and manipulation of a .csv output of EnergyPlus
Execute the cells in this notebook one at a time and try to understand what each code snippet is d...

```
import pandas as pd
import datetime
from datetime import timedelta
import time
%matplotlib inline
def loadsimdata(file, pointname, ConvFactor):
    df = pd.read_csv(file)
    df['desiredpoint'] = df[pointname]*ConvFactor
    df.index = eplustimestamp(df)
    pointdf = df['desiredpoint']
    return pointdf
SimulationDat...
```
... ***CURRENTLY UNDER DEVELOPMENT*** ...
## Simulate Astronomical Tide using U-tide library
inputs required:
* Astronomical Tide historical time series at the study site
in this notebook:
* Tidal armonic analysis based on U-tide library
### Workflow:
<div>
<img src="resources/nb01_03.png" width="300px">...

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# basic import
import os
import os.path as op
# python libs
import numpy as np
import xarray as xr
from datetime import datetime, timedelta
import matplotlib
# custom libs
import utide  # https://github.com/wesleybowman/UTide
# DEV: override installed teslakit
import sys...
```
```
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.sandbox.stats.multicomp import multipletests
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.stats.multitest as smm
```
The data for this problem are taken from a study conducted at Stanfor...
data = pd.read_csv('gene_high_throughput_sequencing.csv')
data.head()
sns.ba...
# Python in a Few Steps: Exercises
This is an exercise to assess your understanding of the fundamentals of Python.
## Exercises
Answer the questions or complete the tasks outlined in bold below; use the specific method described, where applicable.
**What is 7 to the power of 4?**
**Divid...
lst = [1,2,[3,4],[5,[100,200,['hola']],23,11],1,7]
d = {'c1':[1,2,3,{'truco':['oh','hombre','incepción',{'destino':[1,2,3,'hola']}]}]}
# The tuple is
obtenerDominio('usuario@dominio.com')
encontrarPerro('¿Hay algún perro por ahí?')
contarPerro('Este perro corre más rápido que el otro perro')
seq = ['sopa', 'perro'...
<a href="https://colab.research.google.com/github/increpare/tatoeba_toki_pona_spellcheck/blob/main/tatoeba_turkish_spellcheck.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Script that automatically downloads the tatoeba Turkish corpus and recommen...
#@title (Replacement recommendation rules are hidden here.)
replacements = """
...çca -> ...çça
...çce -> ...ççe
...çci -> ...ççi
...çcı -> ...ççı
...çcu -> ...ççu
...çcü -> ...ççü
...çda -> ...çta
...çdan -> ...çtan
...çde -> ...çte
...çden -> ...çten
...fca -> ...fça
...fce -> ...fçe
...fci -> ...fçi
...fcı -> ...fçı...
## High and Low Pass Filters
Now, you might be wondering, what makes filters high and low-pass; why is a Sobel filter high-pass and a Gaussian filter low-pass?
Well, you can actually visualize the frequencies that these filters block out by taking a look at their fourier transforms. The frequency components of any im...
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Define gaussian, sobel, and laplacian (edge) filters
gaussian = (1/9)*np.array([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]])
sobel_x= np.array([[-1, 0, 1],
[-2, 0, 2],
...
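To make the "visualize the frequencies" idea concrete, here is a minimal NumPy-only sketch (not part of the original notebook) comparing the DC (zero-frequency) response of the low-pass Gaussian kernel and the high-pass Sobel kernel:

```python
import numpy as np

gaussian = np.ones((3, 3)) / 9.0                  # low-pass: local average
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])               # high-pass: horizontal gradient

def dc_response(kernel):
    """Magnitude of the kernel's zero-frequency (DC) Fourier component."""
    spectrum = np.fft.fft2(kernel, s=(64, 64))    # zero-pad for a dense spectrum
    return abs(spectrum[0, 0])

print(dc_response(gaussian))  # ~1.0 -> constant (low-frequency) content passes
print(dc_response(sobel_x))   # ~0.0 -> constant content is blocked entirely
```

The DC component is just the sum of the kernel weights, which is why an averaging kernel passes flat regions while a gradient kernel zeroes them out.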
```
!pip3 install torch torchnlp torchvision
import re
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM, Embedding, Dropout
from keras.preprocessing....
```
# 100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.
If you find an...
import numpy as np
print(np.__version__)
np.show_config()
Z = np.zeros(10)
print(Z)
Z = np.zeros((10,10))
print("%d bytes" % (Z.size * Z.itemsize))
%run `python -c "import numpy; numpy.info(numpy.add)"`
Z = np.zeros(10)
Z[4] = 1
print(Z)
Z = np.arange(10,50)
print(Z)
Z = np.arange(50)
Z = Z[::-1]
print(Z)
Z = n...
```
!apt-get install p7zip
!p7zip -d -f -k ../input/mercari-price-suggestion-challenge/train.tsv.7z
!unzip -o ../input/mercari-price-suggestion-challenge/sample_submission_stg2.csv.zip
!unzip -o ../input/mercari-price-suggestion-challenge/test_stg2.tsv.zip
!p7zip -d -f -k ../input/mercari-price-suggestion-challenge/tes...
```
# Wavelets in Jupyter Notebooks
> A notebook to show off the power of fastpages and jupyter.
- toc: false
- branch: master
- badges: true
- comments: true
- categories: [wavelets, jupyter]
- image: images/some_folder/your_image.png
- hide: false
- search_exclude: true
- metadata_key1: metadata_value1
- metadata_key2: m...
# This is a comment
import numpy as np
import pandas as pd
from scipy.fftpack import fft
import matplotlib.pyplot as plt
import pywt
def plot_wavelet(time, signal, scales,
# waveletname = 'cmor1.5-1.0',
waveletname = 'gaus5',
cmap = plt.cm.seismic,
t...
<a href="https://colab.research.google.com/github/paulowe/ml-lambda/blob/main/colab-train1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Import packages
```
import sklearn
import pandas as pd
import numpy as np
import csv as csv
from sklearn.m...
import sklearn
import pandas as pd
import numpy as np
import csv as csv
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from sklearn import metrics
from sklearn.externals import joblib
from sklearn.preprocessing impo...
```
# Periodic Motion: Kinematic Exploration of Pendulum
Working with observations to develop a conceptual representation of periodic motion in the context of a pendulum.
### Dependencies
This is my usual spectrum of dependencies that seem to be generally useful. We'll see if I need additional ones. When needed I will u...
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import default_rng
rng = default_rng()
conceptX = [-5., 0., 5., 0., -5., 0]
conceptY = [-.6, 0.,0.6,0.,-0.6,0.]
conceptTheta = [- 15. , 0., 15., 0., -15.,0.]
conceptTime = [0., 1.,2.,3.,4.,5.]
fig, ax = plt.subplots()
ax.scatter(c...
# Let's play with "tflite micro"!
## Original notebook: [@dansitu](https://twitter.com/dansitu)
### Japanese version: [@proppy](https://twitter.com/proppy)
# What is "tflite micro"?
- It means "tflite" running on a microcontroller

- https://github.com/tens...
! python -m pip install --pre tensorflow
! python -m pip install matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 200
import numpy as np
import math
import matplotlib.pyplot as plt
x_values = np.random.uniform(low=0, high=2*math.pi, size=1000)
np.random.shuffle(x_values)
y_...
```
%matplotlib inline
```
A Gentle Introduction to ``torch.autograd``
---------------------------------
``torch.autograd`` is PyTorch’s automatic differentiation engine that powers
neural network training. In this section, you will get a conceptual
understanding of how autograd helps a neural network train.
Backgr...
%matplotlib inline
import torch, torchvision
model = torchvision.models.resnet18(pretrained=True)
data = torch.rand(1, 3, 64, 64)
labels = torch.rand(1, 1000)
prediction = model(data) # forward pass
loss = (prediction - labels).sum()
loss.backward() # backward pass
optim = torch.optim.SGD(model.parameters(), lr=1e-...
# Exploratory Data Analysis
## Import libraries
```
import pandas as pd
```
### Load data
```
df = pd.read_csv('Train.csv', sep=';')
df.head()
len(df)
df['opinion'].str.len().mean()
df['opinion'].str.len().max()
df['opinion'].str.len().min()
df['opinion'].str.len().hist(bins=200)
len(df[df['opinion'].str.len() < 10...
```
# Deploy machine learning models to Azure
description: (preview) deploy your machine learning or deep learning model as a web service in the Azure cloud.
## Connect to your workspace
```
from azureml.core import Workspace
# get workspace configurations
ws = Workspace.from_config()
# get subscription and resourcegr...
from azureml.core import Workspace
# get workspace configurations
ws = Workspace.from_config()
# get subscription and resourcegroup from config
SUBSCRIPTION_ID = ws.subscription_id
RESOURCE_GROUP = ws.resource_group
RESOURCE_GROUP, SUBSCRIPTION_ID
!az account set -s $SUBSCRIPTION_ID
!az ml workspace list --resource-...
```
# Financial Planning with APIs and Simulations
In this Challenge, you’ll create two financial analysis tools by using a single Jupyter notebook:
Part 1: A financial planner for emergencies. The members will be able to use this tool to visualize their current savings. The members can then determine if they have enough...
# Import the required libraries and dependencies
import os
import requests
import json
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
%matplotlib inline
# Load the environment variables from the .env file
#by calling the load_dotenv func...
```
import statsmodels.formula.api as smf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
indicepanel = pd.read_csv('../data/indice/indicepanel.csv', index_col=0)  # DataFrame.from_csv was removed from pandas
indicepanel.head()
Train = indicepanel.iloc[-2000:-1000, :]
Test = i...
```
<a href="https://colab.research.google.com/github/chadeowen/DS-Sprint-03-Creating-Professional-Portfolios/blob/master/ChadOwen_DS_Unit_1_Sprint_Challenge_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data Science Unit 1 Sprint Challenge 3
# C...
# Grid Search
Let's incorporate grid search into your modeling process. To start, include an import statement for `GridSearchCV` below.
```
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.st...
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.ensemble import RandomForestClassifier...
```
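As a dependency-free refresher on what the search itself does — scoring every combination in a parameter grid and keeping the best — here is a sketch of exhaustive grid search (the idea only, not scikit-learn's `GridSearchCV` API):

```python
from itertools import product

def grid_search(score_fn, param_grid):
    """Score every parameter combination; return the best (params, score)."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[name] for name in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective, maximised at a=2, b=0.1
best, score = grid_search(lambda a, b: -(a - 2) ** 2 - (b - 0.1) ** 2,
                          {"a": [1, 2, 3], "b": [0.1, 1.0]})
print(best, score)  # {'a': 2, 'b': 0.1} 0.0
```

`GridSearchCV` layers cross-validated scoring and refitting on top of exactly this exhaustive loop.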
# Aries Basic Controller Example - Alice
# DID Exchange - Inviter
In this notebook we'll be initiating the aries [DID Exchange](https://github.com/hyperledger/aries-rfcs/tree/master/features/0023-did-exchange) protocol using the aries_basic_controller package.
This notebook has the following phases:
1. Pull in depe...
%autoawait
import time
import asyncio
from aries_basic_controller.aries_controller import AriesAgentController
WEBHOOK_HOST = "0.0.0.0"
WEBHOOK_PORT = 8022
WEBHOOK_BASE = ""
ADMIN_URL = "http://alice-agent:8021"
# Based on the aca-py agent you wish to control
agent_controller = AriesAgentController(webhook_host=...
```
%matplotlib inline
```
Tensors
--------------------------------------------
Tensors are a specialized data structure that are very similar to arrays
and matrices. In PyTorch, we use tensors to encode the inputs and
outputs of a model, as well as the model’s parameters.
Tensors are similar to NumPy’s ndarrays, e...
%matplotlib inline
import torch
import numpy as np
data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)
x_data.dtype
x_data
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
np_array
x_np
x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")
x_rand = tor...
# Example 04: General Use of XGBoostRegressorHyperOpt
[](https://colab.research.google.com/github/slickml/slick-ml/blob/master/examples/optimization/example_04_XGBoostRegressorrHyperOpt.ipynb)
### Google Colab Configuration
```
# !git clone https://github.com/slickml/slick-ml.git
# %cd slick-ml
# !pip install -r requirements.txt
# # Change path to project root
%cd ../..
%load_ext autoreload
# widen the screen
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# change the pa...
```
```
import pandas
df1 = pandas.read_csv('GSE6631-Upregulated.csv')
df2 = pandas.read_csv('GSE113282-Upregulated.csv')
df3 = pandas.read_csv('GSE12452-Upregulated50.csv')
a = []
aa = []
b = []
c = []
common = []
for i in range(0, len(df1)):
a.append(df1['Gene'][i])
aa.append(df1['Gene'][i])
for i in range(0...
```
```
%matplotlib inline
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 4.5))
plt.subplots_adjust(left=0.02, right=0.98, top=0.98, bottom=0.00)
m = Basemap(projection='robin',lon_0=0,resolution='c')
m.fillcontinents(color='gray',lake_color='white')
m.drawcoastlines(...
```
# Spherical coordinates in shenfun
The Helmholtz equation is given as
$$
-\nabla^2 u + \alpha u = f.
$$
In this notebook we will solve this equation on a unitsphere, using spherical coordinates. To verify the implementation we use a spherical harmonics function as manufactured solution.
We start the implementation...
from shenfun import *
from shenfun.la import SolverGeneric1ND
import sympy as sp
r = 1
theta, phi = psi = sp.symbols('x,y', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
N, M = 256, 256
L0 = FunctionSpace(N, 'L', domain=(0, np.pi))
F1 = FunctionSpace(M, 'F'...
# Examples of basic allofplos functions
```
import datetime
from allofplos.plos_regex import (validate_doi, show_invalid_dois, find_valid_dois)
from allofplos.samples.corpus_analysis import (get_random_list_of_dois, get_all_local_dois,
get_all_plos_dois)
from allofplos.co...
```
# 250-D Multivariate Normal
Let's go for broke here.
## Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
```
# Python 3 compatability
from __future__ import division, print_function
from builtins impo...
# Python 3 compatability
from __future__ import division, print_function
from builtins import range
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
import math
from numpy import linalg
# inline plotting
%matplotlib inline
# plotting
import matplotlib
f...
```
# Transfer Learning
In this notebook, you'll learn how to use pre-trained networks to solved challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html).
ImageNet is a m...
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
data_dir = 'Cat_Dog_data'
# TODO: Define transforms for the training data a...
```
import os,sys
import numpy as np
import yaml
import scipy.integrate as integrate
import matplotlib.pyplot as plt
import math
```
## Source Matrix
### Parameters
```
with open('configure.yml','r') as conf_para:
conf_para = yaml.load(conf_para,Loader=yaml.FullLoader)
```
### wavefront_initialize
```
def wav...
import os,sys
import numpy as np
import yaml
import scipy.integrate as integrate
import matplotlib.pyplot as plt
import math
with open('configure.yml','r') as conf_para:
conf_para = yaml.load(conf_para,Loader=yaml.FullLoader)
def wavefront_initialize(pixelsize_x = 55e-06,pixelsize_y=55e-06,fs_size = 2000,ss_size...
```
```
"""
RDF generator for the PREDICT drug indication gold standard (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/bin/msb201126-s4.xls)
@version 1.0
@author Remzi Celebi
"""
import pandas as pd
from csv import reader
from src.util import utils
from src.util.utils import Dataset, DataResource
from rdflib impor...
```
# Seminar 1. Python, numpy
## Starter pack (not part of the course)
👒 Get comfortable with GitHub. Clone the course [repo](https://github.com/AsyaKarpova/ml_nes_2021). Optional [tips](https://t.me/KarpovCourses/213) on formatting.
👒 [Leetcode](https://leetcode.com/problemset/all/https://leetcode.com/problems...
# code
# code
candy_prices = [35.4, 26.7, -33.8, 41.9, -100, 25]
# code
import random
def get_number():
return random.randrange(17, 35)
# code
ids = ['id1', 'id2', 'id30', 'id3','id100', 'id22']
# code
elems = [1, 2, 3, 'b||']
# code
# code
# code
from typing import List
def runningSum(nums: List[int]) -> Li...
```
import matplotlib.pyplot as plt
import numpy as np
import sys;
import power_law_analysis as pl
import glob
import pandas as pd
import power_spectrum as pow_spec
import scipy.interpolate as interpolate
ptomm = 216/1920 # px to mm factor for Samsung T580
def load_trace(filename):
    d = pd.read_csv(filename, sep="...
```
### Loading and combining data
```
#importing libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
#reading the files as dataframes
df1 = pd.read_csv('ml_case_training_data.csv')
df2 = pd.read_csv('ml_case_training_hist_data.csv')
df3 = pd.read_csv('ml_case_training_output.csv')
```
```
#default_exp database
%load_ext autoreload
%autoreload 2
```
# database
> helpers to get and query a sqlalchemy engine for DB containing metadata on experiments
```
#export
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, MetaData, select
import pandas as pd
import getp...
#default_exp database
%load_ext autoreload
%autoreload 2
#export
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, MetaData, select
import pandas as pd
import getpass
import json
#export
def get_db_engine(username, password, ip_adress, model_name, rdbms="mysql"):
"""
...
```
# Chapter 7. Text Document Classification - (6) Multi-class Classification with a CNN Model
- Now let's apply the same model to multi-class classification.
- For this we use the 20 Newsgroups dataset.
- The 20 Newsgroups data is fetched through a sklearn function call, so there is no need to download it separately
- The model only applies embedding initialization from a trained GloVe model
```
import os
import config
from dataloader.loader import Loader
from preprocessing.utils import Prep...
import os
import config
from dataloader.loader import Loader
from preprocessing.utils import Preprocess, remove_empty_docs
from dataloader.embeddings import GloVe
from model.cnn_document_model import DocumentModel, TrainingParameters
from keras.callbacks import ModelCheckpoint, EarlyStopping
import numpy as np
from ker...
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import sys
sys.path.insert(0, '/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/src')
import read_video
control0 = pd.read_csv("/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/results/results_DsR...
```
```
from tensorflow import keras
from tensorflow.keras import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.regularizers import l2  # L2 regularization
import tensorflow as tf
import numpy as np
import pandas as pd
normal = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9...
```
```
import gym
import math
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import time as t
from gym import envs
envids = [spec.id for spec in envs.registry.all()]...
```
```
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```
# Homework 7
**Instructions:** Complete the notebook below. Download the completed notebook in HTML format. Upload assignment using Canvas.
**Due:** Feb. 23 at **2pm.**
...
```
import metpy.calc as mpcalc
from metpy.units import units
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
from cartopy.feature import ShapelyFeature,Natural...
```
```
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, Bidirectional, Dropout
from sk...
```
```
def pow(x, n, I, mult):
# https://sahandsaba.com/five-ways-to-calculate-fibonacci-numbers-with-python-code.html
"""
Returns x to the power of n. Assumes I to be identity relative to the
multiplication given by mult, and n to be a positive integer.
"""
if n == 0:
return I
    elif n =...
```
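One classic payoff of a generic `pow` like the one above is computing Fibonacci numbers in O(log n) multiplications via 2×2 matrix exponentiation. A self-contained sketch (restating a recursive fast-power helper so the example runs on its own):

```python
def fast_pow(x, n, I, mult):
    """x ** n under `mult` with identity I, in O(log n) multiplications."""
    if n == 0:
        return I
    half = fast_pow(x, n // 2, I, mult)
    result = mult(half, half)
    return mult(result, x) if n % 2 else result

def mat_mult(A, B):
    # 2x2 matrix product on nested tuples
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def fib(n):
    # [[1, 1], [1, 0]] ** n == [[F(n+1), F(n)], [F(n), F(n-1)]]
    M = fast_pow(((1, 1), (1, 0)), n, ((1, 0), (0, 1)), mat_mult)
    return M[0][1]

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```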
# Logging
We can track events in a software application, this is known as **logging**. Let’s start with a simple example, we will log a warning message.
As opposed to just printing the errors, logging can be configured to disable output or save to a file. This is a big advantage to simple printing the errors.
```
im...
import logging
# print a log message to the console.
logging.warning('This is a warning!')
import logging
logging.basicConfig(filename='program.log',level=logging.DEBUG)
logging.warning('An example message.')
logging.warning('Another message')
logging.basicConfig(level=logging.DEBUG)
logging.basicConfig(level=logg...
```
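A step beyond `basicConfig` that often comes next is a named, module-level logger with its own level and handler, so an application does not fight libraries over the root logger. A hedged sketch (the logger name is illustrative):

```python
import logging

logger = logging.getLogger("myapp")   # "myapp" is an illustrative name
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(name)s:%(levelname)s:%(message)s"))
logger.addHandler(handler)

logger.debug("visible: the logger's own level is DEBUG")
logger.warning("warnings pass through as well")
```

In a package you would typically write `logging.getLogger(__name__)` so each module gets its own logger in the hierarchy.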
(IN)=
# 1.7 Numerical Integration
```{admonition} Notes for the docker container:
Docker command for running this note locally:
note: replace `<ruta a mi directorio>` with the path of the directory you want mapped to `/datos` inside the docker container.
`docker run --rm -v <ruta a mi directorio>:/...
---
Note generated from [liga1](https://www.dropbox.com/s/jfrxanjls8kndjp/Diferenciacion_e_Integracion.pdf?dl=0) and [liga2](https://www.dropbox.com/s/k3y7h9yn5d3yf3t/Integracion_por_Monte_Carlo.pdf?dl=0).
In what follows we consider the integrand functions to be in $\mathcal{C}^2$ on the set...
# Use Case 1: Kögur
In this example we will subsample a dataset stored on SciServer using methods resembling field-work procedures.
Specifically, we will estimate volume fluxes through the [Kögur section](http://kogur.whoi.edu) using (i) mooring arrays, and (ii) ship surveys.
```
# Import oceanspy
import oceanspy as o...
# Import oceanspy
import oceanspy as ospy
# Import additional packages used in this notebook
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
# Start client
from dask.distributed import Client
client = Client()
client
# Open dataset stored on SciServer.
od = ospy.open_oceandataset.from_c...
```
# CS229: Problem Set 4
## Problem 4: Independent Component Analysis
**C. Combier**
This iPython Notebook provides solutions to Stanford's CS229 (Machine Learning, Fall 2017) graduate course problem set 3, taught by Andrew Ng.
The problem set can be found here: [./ps4.pdf](ps4.pdf)
I chose to write the solutions to...
First, let's set up the environment and write helper functions:
- ```normalize``` ensures all mixes have the same volume
- ```load_data``` loads the mix
- ```play``` plays the audio using ```sounddevice```
Next we write a numerically stable sigmoid function, to avoid overflows:
The following functions calculate... | 0.80147 | 0.98752 |
# Helm 201
A deep-dive into Helm (v3) and details like
* Templating
* Charts and Subcharts
* Usage and internal structure in Kubernetes
* Integrations
```
wd_init = "work/helm-init2"
!helm version
```
---
---
## Init
* Create a new template / Helm Chart
```
!echo $wd_init
!mkdir -p $wd_init
!helm create $wd_ini...
wd_init = "work/helm-init2"
!helm version
!echo $wd_init
!mkdir -p $wd_init
!helm create $wd_init/demo-helm-201
!tree $wd_init
!cat $wd_init/demo-helm-201/Chart.yaml | grep -B2 -i 'version:'
!echo "Render template and generate Kubernetes resource files"
!helm template demo-helm-201-common $wd_init/demo-helm-201 -...
```
Lorenz equations as a model of atmospheric convection:
This is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters (σ, β, ρ) are varied.
$$\dot{x} = \sigma(y - x)$$
$$\dot{y} = \rho x - y - xz$$
$$\dot{z} = -\beta z + xy$$
The Lorenz equations also arise in simplified models for lasers, d...
%matplotlib inline
from ipywidgets import interact, interactive
from IPython.display import clear_output, display, HTML
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
#Compu...
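To show the update rule concretely, here is a dependency-free forward-Euler sketch of the system for the classic parameter values σ=10, β=8/3, ρ=28 (illustrative only — a real study would use a higher-order integrator such as the SciPy routines imported above):

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    dx = sigma * (y - x)          # x' = sigma * (y - x)
    dy = x * (rho - z) - y        # y' = rho * x - y - x * z
    dz = x * y - beta * z         # z' = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

state = (1.0, 1.0, 1.0)
for _ in range(1000):             # integrate to t = 10
    state = lorenz_step(*state)
print(state)                      # a point near the strange attractor
```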
## Neural Networks
- This was adopted from the PyTorch Tutorials.
- http://pytorch.org/tutorials/beginner/pytorch_with_examples.html
## Neural Networks
- Neural networks are the foundation of deep learning, which has revolutionized the
```In the mathematical theory of artificial neural networks, the universal app...
### Generate Fake Data
- `D_in` is the number of dimensions of an input varaible.
- `D_out` is the number of dimentions of an output variable.
- Here we are learning some special "fake" data that represents the xor problem.
- Here, the dv is 1 if either the first or second variable is
### A Simple Neural Network
...
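Before training anything, it helps to see why xor needs a hidden layer at all: no single linear threshold separates it, but two threshold units plus an output unit do. A tiny fixed-weight illustration (hand-picked weights, not learned ones):

```python
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)      # hidden unit ~ OR
    h2 = step(x1 + x2 - 1.5)      # hidden unit ~ AND
    return step(h1 - h2 - 0.5)    # output: OR and-not AND == XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

Training replaces these hand-picked weights with learned ones, but the shape of the solution is the same.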
# Image Operators And Transforms
Nina Miolane, UC Santa Barbara
<center><img src="figs/02_main.png" width=1200px alt="default"/></center>
# Last Lecture
- **01: Image Formation Models (Ch. 2)**
- 02: Image Operators and Transforms (Ch. 3)
- 03: Feature Detection, Matching, Segmentation (Ch. 7)
- 04: Image Alignment...

```
from skimage import data, io
image = data.astronaut()
print(type(image))
print(image.shape)
# io.imshow(image);
io.imshow(image[10:300, 50:200, 2]);
```

```
from skimage import data, io
image = data.astronaut()
image = image / 255
gain = 1.8  # a
bias = 0.  # b
mult_image = gain * image + bias
io.imshow(mult_image);
```

fr...
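With a gain of 1.8 the transformed values exceed the [0, 1] range of the normalized image, which display functions typically warn about. A common follow-up (a sketch with a made-up stand-in image, not part of the original cell) is to clip back into range:

```python
import numpy as np

image = np.linspace(0.0, 1.0, 16).reshape(4, 4)    # stand-in grayscale image in [0, 1]
gain, bias = 1.8, 0.0                              # point transform g(x) = a * f(x) + b
adjusted = np.clip(gain * image + bias, 0.0, 1.0)  # clamp back into display range
```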
# Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model

# Define a transform to normalize the data
transform =...
```
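The notebook's `helper` and `fc_model` modules are not shown here. As a minimal, self-contained sketch of the same save/load pattern (the filename and layer sizes are arbitrary), only the model's `state_dict` is serialized, so the matching architecture must be rebuilt before loading:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
torch.save(model.state_dict(), "checkpoint.pth")        # parameters only, not the class

reloaded = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
reloaded.load_state_dict(torch.load("checkpoint.pth"))  # architecture must match
```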
### Regression: Predicting continuous labels
In contrast with the discrete labels of a classification algorithm, we will next look at a simple *regression* task in which the labels are continuous quantities.
Consider the data shown in the following figure, which consists of a set of points each with a continuous label.
```
from IPython.display import Pretty as disp
hint = 'https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/docs/hints/'  # path to hints on GitHub
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="white")
df = pd.read_csv('https://raw.githubusercontent.com/soltan...
```
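The figure itself is not reproduced here. As a hedged stand-in for data with continuous labels (the slope, intercept, and noise level are made up for illustration), a straight-line least-squares fit shows what a regression model learns:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1.0, 1.0, size=50)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=50)  # continuous labels

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares line fit
```

Unlike classification, the fitted output here is a continuous quantity, and predictions for new `x` values are `slope * x + intercept`.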
```
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import scipy.io
from tensorflow.python.framework import function
import os, re
import claude.utils as cu
import claude.tx as tx
import claude.claudeflow.autoencoder as ae
import claude.claudeflow.helper as ...
```