Dataset columns (viewer statistics):

| column | type | lengths / values |
| --- | --- | --- |
| markdown | string | 0–37k chars |
| code | string | 1–33.3k chars |
| path | string | 8–215 chars |
| repo_name | string | 6–77 chars |
| license | string (categorical) | 15 values |
Three-point centered-difference formula for the second derivative: $f''(x) = \frac{f(x - h) - 2f(x) + f(x + h)}{h^2} - \frac{h^2}{12}f^{(iv)}(c)$ for some $c$ between $x - h$ and $x + h$. Rounding error. Example: Approximate the derivative of $f(x) = e^x$ at $x = 0$.
# Parameters
import math

f = lambda x: math.exp(x)
real_value = 1
h_msg = "$10^{-%d}$"

# Two-point and three-point first-derivative formulas
twp_deri_x1 = lambda x, h: (f(x + h) - f(x)) / h
thp_deri_x1 = lambda x, h: (f(x + h) - f(x - h)) / (2 * h)

data = [ ["h", "$f'(x) \\approx \\frac{e^{x+h} - e^x}{h}$", "error", "$f'(x) \\approx \\frac{e^{x+h} - e^{x...
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
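Since the markdown above states the three-point formula for $f''$, here is a minimal sketch (standard library only, not part of the notebook) applying it to $f(x) = e^x$ at $x = 0$, where the exact value is $f''(0) = 1$; shrinking $h$ too far lets rounding error dominate the $O(h^2)$ truncation error.
import math

f = lambda x: math.exp(x)

for k in range(1, 7):
    h = 10 ** (-k)
    # three-point centered difference for the second derivative
    approx = (f(0 - h) - 2 * f(0) + f(0 + h)) / h ** 2
    print("h = 1e-%d  f''(0) ~ %.10f  error = %.2e" % (k, approx, abs(approx - 1)))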
Extrapolation for an order-$n$ formula: $ Q \approx \frac{2^nF(h/2) - F(h)}{2^n - 1} $
import sympy as sym
from IPython.display import Math

sym.init_printing(use_latex=True)
x = sym.Symbol('x')
dx = sym.diff(sym.exp(sym.sin(x)), x)
Math('Derivative : %s' % sym.latex(dx))
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
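To make the extrapolation formula concrete, a short sketch (my assumption: using the order-2 centered difference as $F$, so $n = 2$) approximating $f'(0) = 1$ for $f(x) = e^x$.
import math

f = lambda x: math.exp(x)
F = lambda x, h: (f(x + h) - f(x - h)) / (2 * h)  # order n = 2 formula

h = 0.1
Q = (2**2 * F(0, h / 2) - F(0, h)) / (2**2 - 1)    # extrapolated estimate
print(F(0, h), Q, abs(Q - 1))                      # Q is markedly closer to 1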
5.2 Newton-Cotes Formulas for Numerical Integration. Trapezoid Rule: $\int_{x_0}^{x_1} f(x)\,dx = \frac{h}{2}(y_0 + y_1) - \frac{h^3}{12}f''(c)$ where $h = x_1 - x_0$ and $c$ is between $x_0$ and $x_1$. Simpson's Rule: $\int_{x_0}^{x_2} f(x)\,dx = \frac{h}{3}(y_0 + 4y_1 + y_2) - \frac{h^5}{90}f^{(iv)}(c)$ where $h = x_2 - x_...
# Apply Trapezoid Rule
import numpy as np
import scipy.integrate
import sympy as sym

trapz = scipy.integrate.trapz([np.log(1), np.log(2)], [1, 2])

# Evaluate the error term of Trapezoid Rule
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 2)
trapz_err = abs(expr.subs(sym_x, 1).evalf() / 12)

# Print out results
print('Trapezoid rule : %f and upper bound error : %f...
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
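The cell above only applies the Trapezoid Rule; as a companion sketch (an illustration, not the notebook's code), Simpson's Rule on the same integral $\int_1^2 \ln x\,dx = 2\ln 2 - 1 \approx 0.386294$.
import numpy as np

x0, x2 = 1, 2
h = (x2 - x0) / 2
y = np.log([x0, x0 + h, x2])                 # y_0, y_1, y_2
simpson = h / 3 * (y[0] + 4 * y[1] + y[2])
print("Simpson's rule : %f" % simpson)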
Composite Trapezoid Rule: $\int_{a}^{b} f(x)\,dx = \frac{h}{2} \left ( y_0 + y_m + 2\sum_{i=1}^{m-1}y_i \right ) - \frac{(b-a)h^2}{12}f''(c)$ where $h = (b - a) / m$ and $c$ is between $a$ and $b$. Composite Simpson's Rule: $ \int_{a}^{b}f(x)dx = \frac{h}{3}\left [ y_0 + y_{2m} + 4\sum_{i=1}^{m}y_{2i-1} + 2\sum_{i=1}^{m -...
# Apply composite Trapezoid Rule
import numpy as np
import scipy.integrate
import sympy as sym

x = np.linspace(1, 2, 5)
y = np.log(x)
trapz = scipy.integrate.trapz(y, x)

# Error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.log(sym_x), sym_x, 2)
trapz_err = abs((2 - 1) * pow(0.25, 2) / 12 * expr.subs(sym_x, 1).evalf())
print('Trapezoid Rule : %f, error = %f' %(trapz, trapz...
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
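For symmetry with the composite Trapezoid cell, a sketch of the Composite Simpson's Rule from the formula above ($m = 2$ panels, $2m + 1$ nodes; an illustration, not the notebook's code).
import numpy as np

a, b, m = 1, 2, 2
h = (b - a) / (2 * m)
x = np.linspace(a, b, 2 * m + 1)
y = np.log(x)
# odd interior nodes get weight 4, even interior nodes weight 2
simpson = h / 3 * (y[0] + y[-1] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]))
print('Composite Simpson : %f' % simpson)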
Midpoint Rule: $ \int_{x_0}^{x_1} f(x)\,dx = hf(\omega) + \frac{h^3}{24}f''(c) $ where $ h = x_1 - x_0 $, $\omega$ is the midpoint $ x_0 + h / 2 $, and $c$ is between $x_0$ and $x_1$. Composite Midpoint Rule: $ \int_{a}^{b} f(x)\,dx = h \sum_{i=1}^{m}f(\omega_{i}) + \frac{(b - a)h^2}{24} f''(c) $ where $h = (b - a) / m$ an...
# Parameters
import numpy as np
import sympy as sym

m = 10
h = (1 - 0) / m
f = lambda x: np.sin(x) / x
mids = np.arange(0 + h/2, 1, h)

# Apply composite midpoint rule
area = h * np.sum(f(mids))

# Error term
sym_x = sym.Symbol('x')
expr = sym.diff(sym.sin(sym_x) / sym_x, sym_x, 2)
mid_err = abs((1 - 0) * pow(h, 2) / 24 * expr.subs(sym_x, 1).evalf())
# ...
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
5.3 Romberg Integration
import numpy as np

def romberg(f, a, b, step):
    R = np.zeros(step * step).reshape(step, step)
    R[0][0] = (b - a) * (f(a) + f(b)) / 2
    for j in range(1, step):
        h = (b - a) / pow(2, j)
        summ = 0
        for i in range(1, pow(2, j - 1) + 1):
            summ += h * f(a + (2 * i - 1) * h)
        R[j][0] = 0.5 * R[j -...
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
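Since the romberg definition above is cut off, here is a self-contained sketch that completes the same R[j][k] tableau with the standard recurrences (an assumption about the omitted part, under a hypothetical name).
import numpy as np

def romberg_sketch(f, a, b, step):
    R = np.zeros((step, step))
    R[0][0] = (b - a) * (f(a) + f(b)) / 2
    for j in range(1, step):
        h = (b - a) / 2 ** j
        # trapezoid refinement: reuse R[j-1][0], add the new midpoints
        summ = sum(h * f(a + (2 * i - 1) * h) for i in range(1, 2 ** (j - 1) + 1))
        R[j][0] = 0.5 * R[j - 1][0] + summ
        # Richardson extrapolation across the row
        for k in range(1, j + 1):
            R[j][k] = (4 ** k * R[j][k - 1] - R[j - 1][k - 1]) / (4 ** k - 1)
    return R[step - 1][step - 1]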
Example: Apply Romberg Integration to approximate $\int_{1}^{2} \ln{x}\,dx$
import numpy as np
import scipy.integrate

f = lambda x: np.log(x)
result = romberg(f, 1, 2, 4)
print('Romberg Integration : %f' % result)

f = lambda x: np.log(x)
result = scipy.integrate.romberg(f, 1, 2, show=True)
print('Romberg Integration : %f' % result)
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
5.4 Adaptive Quadrature
'''
Use Trapezoid Rule
'''
def adaptive_quadrature(f, a, b, tol):
    return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)

def adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):
    c = (a + b) / 2
    S = lambda x, y: (y - x) * (f(x) + f(y)) / 2
    if abs( S(a, b) - S(a, c) - S(c, b)...
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
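The recursive helper above is also truncated; a minimal self-contained sketch of the same idea (trapezoid-based adaptive quadrature with the usual 3·tol acceptance test; hypothetical name, no depth bookkeeping).
def adaptive_trapezoid(f, a, b, tol):
    c = (a + b) / 2
    S = lambda x, y: (y - x) * (f(x) + f(y)) / 2
    # accept when refining the interval changes the estimate by less than 3*tol
    if abs(S(a, b) - S(a, c) - S(c, b)) < 3 * tol:
        return S(a, c) + S(c, b)
    return (adaptive_trapezoid(f, a, c, tol / 2) +
            adaptive_trapezoid(f, c, b, tol / 2))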
Example: Use Adaptive Quadrature to approximate the integral $ \int_{-1}^{1} (1 + \sin{e^{3x}})\,dx $
import numpy as np

f = lambda x: 1 + np.sin(np.exp(3 * x))
val = adaptive_quadrature(f, -1, 1, tol=1e-12)
print(val)
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
5.5 Gaussian Quadrature
import scipy.special
import scipy.linalg

poly = scipy.special.legendre(2)

# Find the roots of the polynomial via its companion matrix
comp = scipy.linalg.companion(poly)
roots = scipy.linalg.eig(comp)[0]
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
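The cell above finds the Gauss nodes as roots of $P_2$; a sketch completing the by-hand construction with the classical weight formula $w_i = 2 / ((1 - x_i^2)\,P_n'(x_i)^2)$ (an illustration, not the notebook's code).
import numpy as np
import scipy.special

n = 2
poly = scipy.special.legendre(n)   # P_n as np.poly1d
roots = np.sort(poly.roots)        # Gauss nodes
weights = 2 / ((1 - roots ** 2) * poly.deriv()(roots) ** 2)
print(roots)    # approx [-0.5774, 0.5774]
print(weights)  # approx [1.0, 1.0]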
Example: Approximate $\int_{-1}^{1} e^{-\frac{x^2}{2}}\,dx$ using Gaussian Quadrature
import numpy as np
import scipy.integrate
import scipy.special

f = lambda x: np.exp(-np.power(x, 2) / 2)
quad = scipy.integrate.quadrature(f, -1, 1)
print(quad[0])

# Parameters
a = -1
b = 1
deg = 3
f = lambda x: np.exp(-np.power(x, 2) / 2)
x, w = scipy.special.p_roots(deg)  # Or use numpy.polynomial.legendre.leggauss
quad = np.sum(w * f(x))
print(quad)
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
Example: Approximate the integral $\int_{1}^{2} \ln{x}\,dx$ using Gaussian Quadrature
# Parameters
import numpy as np
import scipy.special

a = 1
b = 2
deg = 4
# Change of variables x = ((b - a)t + b + a)/2 maps [-1, 1] onto [a, b]
f = lambda t: np.log(((b - a) * t + b + a) / 2) * (b - a) / 2
x, w = scipy.special.p_roots(deg)
np.sum(w * f(x))
5_Numerical_Differentiation_And_Integration.ipynb
Jim00000/Numerical-Analysis
unlicense
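The change of variables baked into f above generalizes to any interval; a reusable helper sketch (hypothetical name, not from the notebook) for Gaussian quadrature on $[a, b]$.
import numpy as np
import scipy.special

def gauss_quad(f, a, b, deg):
    t, w = scipy.special.p_roots(deg)   # nodes/weights on [-1, 1]
    x = ((b - a) * t + b + a) / 2       # map nodes to [a, b]
    return (b - a) / 2 * np.sum(w * f(x))

print(gauss_quad(np.log, 1, 2, 4))      # ~0.386294 = 2 ln 2 - 1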
Second step: initialize the COMPSs runtime. The parameters indicate whether the execution will generate a task graph, a trace file, a monitoring interval, and debug information. The parameter taskCount is a workaround for the dot generation of the legend.
ipycompss.start(graph=True, trace=True, debug=True,
                project_xml='../project.xml', resources_xml='../resources.xml',
                mpi_worker=True)
tests/sources/python/9_jupyter_notebook/src/simple_mpi.ipynb
mF2C/COMPSs
apache-2.0
I'd like to make this figure better: easier to tell which rows people are on. Save Notebook
%%bash
jupyter nbconvert --to slides Exploring_Data.ipynb && mv Exploring_Data.slides.html ../notebook_slides/Exploring_Data_v2.slides.html
jupyter nbconvert --to html Exploring_Data.ipynb && mv Exploring_Data.html ../notebook_htmls/Exploring_Data_v2.html
cp Exploring_Data.ipynb ../notebook_versions/Exploring_Data_v2....
notebook_versions/Exploring_Data_v2.ipynb
walkon302/CDIPS_Recommender
apache-2.0
2. Read in the hanford.csv file
cd C:\Users\Harsha Devulapalli\Desktop\algorithms\class6

import pandas as pd

df = pd.read_csv("data/hanford.csv")
class6/donow/Devulapalli_Harsha_Class6_DoNow.ipynb
ledeprogram/algorithms
gpl-3.0
6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
import matplotlib.pyplot as plt

df.plot(kind="scatter", x="Exposure", y="Mortality")
plt.plot(df["Exposure"], slope * df["Exposure"] + intercept, "-", color="red")
r = df.corr()['Exposure']['Mortality']
r * r
class6/donow/Devulapalli_Harsha_Class6_DoNow.ipynb
ledeprogram/algorithms
gpl-3.0
7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
def predictor(exposure):
    return intercept + float(exposure) * slope

predictor(10)
class6/donow/Devulapalli_Harsha_Class6_DoNow.ipynb
ledeprogram/algorithms
gpl-3.0
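The slope and intercept used by predictor presumably come from an earlier cell; a minimal sketch of one standard way to obtain them (scipy.stats.linregress, recomputed here so the snippet is self-contained).
import pandas as pd
from scipy import stats

df = pd.read_csv("data/hanford.csv")
slope, intercept, r_value, p_value, stderr = stats.linregress(df["Exposure"], df["Mortality"])
print(intercept + 10 * slope)  # predicted mortality at exposure = 10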
Tokenization. The fewer the tasks, the faster the speed. For example, specify tokenization only; fine-grained is the default:
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Perform coarse-grained tokenization:
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok/coarse').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Perform fine-grained and coarse-grained tokenization at the same time:
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok*')
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
coarse is coarse-grained segmentation, fine is fine-grained. Note that the input unit of the native API is limited to a sentence, so text must first be split into sentences with a multilingual sentence-splitting model or a rule-based splitting function. The RESTful API supports full documents, sentences, and pre-tokenized sentences. Apart from that, the semantics of the RESTful and native APIs are identical, so users can swap them seamlessly. Custom dictionaries: a custom dictionary is a member variable of the tokenization task. To manipulate a custom dictionary, first get the tokenization task, taking the fine-grained standard as an example:
tok = HanLP['tok/fine']
tok
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Custom dictionaries are member variables of the tokenization task:
tok.dict_combine, tok.dict_force
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
HanLP supports custom dictionaries at two priority levels, combine and force, to meet the needs of different scenarios. With no dictionary attached:
tok.dict_force = tok.dict_combine = None
HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Force mode. Force mode preferentially outputs custom entries found by forward longest matching (use with caution; see Chapter 2 of《自然语言处理入门》):
tok.dict_force = {'和服', '服务项目'}
HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Contrary to popular intuition, giving the dictionary the highest priority is not necessarily a good thing: it can easily match custom words that should not be segmented out, causing ambiguity. The longer a custom word is, the less likely it is to cause ambiguity. This motivates extending force mode into a forced-correction feature. Forced correction works on a similar principle, but replaces matched custom entries with the specified tokenization:
tok.dict_force = {'和服务': ['和', '服务']}
HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Combine mode. Combine mode has lower priority than the statistical model: dict_combine performs longest matching on the statistical model's tokenization output and merges the matched entries. This mode is recommended in general.
tok.dict_force = None
tok.dict_combine = {'和服', '服务项目'}
HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Understanding this requires some background in algorithms; beginners may refer to《自然语言处理入门》. Words with spaces: words containing spaces, tabs, etc. (characters the Transformer tokenizer removes) must be provided in tuple form:
tok.dict_combine = {('iPad', 'Pro'), '2个空格'}
HanLP("如何评价iPad Pro ?iPad Pro有2个空格", tasks='tok/fine')['tok/fine']
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Curious users, read on: a plain string in a tuple dictionary is actually equivalent to all possible segmentations of that string:
dict(tok.dict_combine.config["dictionary"]).keys()
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Word offsets. HanLP can output each word's original position in the text, which is useful in scenarios such as search engines. During lexical analysis, non-morpheme characters (spaces, newlines, tabs, etc.) are removed, so extra offset information is needed to locate each word:
tok.config.output_spans = True
sent = '2021 年\nHanLPv2.1 为生产环境带来次世代最先进的多语种NLP技术。'
word_offsets = HanLP(sent, tasks='tok/fine')['tok/fine']
print(word_offsets)
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
The return format is triples of (word, start offset, end offset), with offsets measured in characters.
for word, begin, end in word_offsets:
    assert word == sent[begin:end]
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
hankcs/HanLP
apache-2.0
Reading the dataset. The idea here is to read the files in the dataset to extract training and test data and the corresponding (activity) labels. The outcome is a set of numpy arrays for each split.
# Paths and filenames
DATASET_PATH = "../dataset/UCI HAR/UCI HAR Dataset"
TEST_RELPATH = "/test"
TRAIN_RELPATH = "/train"
VARS_FILENAMES = ['body_acc_x_', 'body_acc_y_', 'body_acc_z_',
                  'body_gyro_x_', 'body_gyro_y_', 'body_gyro_z_',
                  'total_acc_x_', 'total_acc_y_', 'total_acc_z_'] ...
sources/TempScript.ipynb
francof2a/APC
gpl-3.0
Filtered plots
import matplotlib.pyplot as plt

activityToPlot = 2.0
plt.figure(figsize=(16, 8))
plt.title(label_dict[activityToPlot])
for idx, activity in enumerate(labelsTrain):
    if activityToPlot == activity:
        plt.plot(dataTrain[4, idx, :])
plt.show()
sources/TempScript.ipynb
francof2a/APC
gpl-3.0
RNN first tries
numLayers = 50  # despite the name, this is the number of hidden units in the cell
lstm_cell = tf.contrib.rnn.BasicRNNCell(numLayers)  # a basic RNN cell, not an LSTM
lstm_cell
sources/TempScript.ipynb
francof2a/APC
gpl-3.0
We will generate a test set of 50 "bombs", and each "bomb" will be run through a 20-step measurement circuit. We set up the program as explained in previous examples.
# Use local qasm simulator
backend = Aer.get_backend('qasm_simulator')
# Use the IBMQ Quantum Experience
# backend = least_busy(IBMQ.backends())

N = 50      # Number of bombs
steps = 20  # Number of steps for the algorithm, limited by maximum circuit depth
eps = np.pi / steps  # Algorithm parameter, small

# Prototype circu...
community/terra/qis_adv/vaidman_detection_test.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Generating a random bomb is achieved by simply applying a Hadamard gate to qubit $q_1$, which starts in $|0\rangle$, and then measuring. This randomly gives a $0$ or $1$, each with equal probability. We run one such circuit for each bomb, since circuits are currently limited to a single measurement.
# Quantum circuits to generate bombs
qc = []
circuits = ["IFM_gen" + str(i) for i in range(N)]
# NB: Can't have more than one measurement per circuit
for circuit in circuits:
    IFM = QuantumCircuit(q_gen, c_gen, name=circuit)
    IFM.h(q_gen[0])                  # Turn the qubit into |0> + |1>
    IFM.measure(q_gen[0], c_gen[0])
    qc....
community/terra/qis_adv/vaidman_detection_test.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Note that, since we want to measure several discrete instances, we do not want to average over multiple shots. Averaging would yield partial bombs, but we assume bombs are discretely either live or dead.
result = execute(qc, backend=backend, shots=1).result()  # Note that we only want one shot
bombs = []
for circuit in qc:
    for key in result.get_counts(circuit):  # Hack: there should only be one key, since there was only one shot
        bombs.append(int(key))
# print(', '.join(('Live' if bomb else 'Dud' for bomb in bo...
community/terra/qis_adv/vaidman_detection_test.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Testing the Bombs. Here we implement the algorithm described above to measure the bombs. As with the generation of the bombs, it is currently impossible to take several measurements in a single circuit; therefore, it must be run on the simulator.
# Use local qasm simulator
backend = Aer.get_backend('qasm_simulator')

qc = []
circuits = ["IFM_meas" + str(i) for i in range(N)]
# Creating one measurement circuit for each bomb
for i in range(N):
    bomb = bombs[i]
    IFM = QuantumCircuit(q, c, name=circuits[i])
    for step in range(steps):
        IFM.ry(eps, q[0])...
community/terra/qis_adv/vaidman_detection_test.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables...
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a...
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak pa...
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Hyperparameters
# Size of input image to discriminator
input_size = 784  # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.25
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_wit...
# Calculate losses
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
                                            labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_...
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
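The cell above is cut off before the generator loss; a hedged sketch of the usual companion term, assuming the truncated tensor name is d_logits_fake and following the same tf.nn.sigmoid_cross_entropy_with_logits pattern (not necessarily the notebook's exact code).
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake,                 # discriminator logits on generated images (assumed name)
        labels=tf.ones_like(d_logits_fake)))  # generator wants fakes scored as real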
Training
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
    ...
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training.
# Load samples from generator taken while training
import pickle as pkl

with open('train_samples.pkl', 'rb') as f:
    samples = pkl.load(f)
gan_mnist/Intro_to_GANs_Exercises.ipynb
AndysDeepAbstractions/deep-learning
mit
Compute and visualize statistics. Generate data statistics with tfdv.generate_statistics_from_csv. For large datasets it uses Apache Beam internally for parallel processing, and it can be combined with Beam's PTransform.
train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Visualize with tfdv.visualize_statistics; internally it reportedly uses Facets. Numeric and categorical features are shown separately.
tfdv.visualize_statistics(train_stats)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Infer a schema. Infer a schema from the data with tfdv.infer_schema and display it with tfdv.display_schema.
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Check the evaluation data for errors: the train and validation sets may contain different data. This seems useful for Kaggle work.
# Compute stats for evaluation data
eval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA)

# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
                          lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Check for evaluation anomalies: check whether the validation data contains values that were absent from the training data.
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Fix evaluation anomalies in the schema.
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9

# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_ty...
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Schema Environments. The schema should also be checked at serving time. Environments can be used to express such requirements. In particular, features in the schema can be associated with a set of environments using default_environment, in_environment, and not_in_environment.
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Int values are present => fix by treating them as float.
options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)

# All features are by default in both TRAINING an...
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Check for drift and skew. Drift: drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive...
# Add skew comparator for 'payment_type' feature.
payment_type = tfdv.get_feature(schema, 'payment_type')
payment_type.skew_comparator.infinity_norm.threshold = 0.01

# Add drift comparator for 'company' feature.
company = tfdv.get_feature(schema, 'company')
company.drift_comparator.infinity_norm.threshold = 0.001

skew_...
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
Freeze the schema (save the schema).
from tensorflow.python.lib.io import file_io
from google.protobuf import text_format

file_io.recursive_create_dir(OUTPUT_DIR)
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)

!cat {schema_file}
Tensorflow-Extended/TFDV(data validation) example.ipynb
zzsza/TIL
mit
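If the frozen schema needs to be reloaded later, a short sketch using the counterpart of write_schema_text (assumes a TFDV version that exposes tfdv.load_schema_text).
# Reload the frozen schema from the pbtxt file written above
loaded_schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(loaded_schema)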
Concatenating along an axis. DataFrame has a rich set of merge methods; in addition, there is another kind of data combination operation called concatenation, binding, or stacking. NumPy also has a concatenation function.
import numpy as np

arr1 = np.arange(12).reshape(3, 4)
print(arr1)
np.concatenate([arr1, arr1], axis=1)
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
yw-fang/readingnotes
apache-2.0
For pandas objects, labeled axes let us further generalize array concatenation. The concat function in pandas provides the machinery for this kind of combination. In the example below there are three Series whose indexes do not overlap; let's see how concat combines them.
import pandas as pd

seri1 = pd.Series([-1, 2], index=list('ab'))
seri2 = pd.Series([2, 3, 4], index=list('cde'))
seri3 = pd.Series([5, 6], index=list('fg'))
print(seri1)
print(seri2)
print(seri3)
pd.concat([seri1, seri2, seri3])
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
yw-fang/readingnotes
apache-2.0
By default, concat works along axis=0, producing a new Series. If you pass axis=1, the result becomes a DataFrame (axis=1 is the columns).
pd.concat([seri1, seri2, seri3], axis=1, sort=False)
pd.concat([seri1, seri2, seri3], axis=1, sort=False, join='inner')  # join='inner' keeps the intersection of the indexes, which is empty here

seri4 = pd.concat([seri1 * 5, seri3])
print(seri4)
seri4 = pd.concat([seri1 * 5, seri3], axis=1, join='inner')
print(seri4)
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
yw-fang/readingnotes
apache-2.0
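A small follow-up sketch (not from the note itself): passing keys to concat labels each piece and produces a hierarchical index on the concatenation axis.
import pandas as pd

seri1 = pd.Series([-1, 2], index=list('ab'))
seri3 = pd.Series([5, 6], index=list('fg'))
print(pd.concat([seri1, seri3], keys=['one', 'two']))  # outer index level: 'one'/'two'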
Appendix (written while taking this note): define a function that returns a string with repeated letters.
# Ref: https://stackoverflow.com/questions/38273353/how-to-repeat-individual-characters-in-strings-in-python
def special_sign(sign, times):
    # sign is a string, times is an integer; sign * times is already a
    # string, so the join below is redundant but harmless
    str_list = sign * times
    new_str = ''.join([i for i in str_list])
    return new_str

print(special_sign('*', 20))
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
yw-fang/readingnotes
apache-2.0
Next, we write a function to construct the factor graphs and prepare labels for training. For each factor graph instance, the structure is a chain but the number of nodes and edges depend on the number of letters, where unary factors will be added for each letter, pairwise factors will be added for each pair of neighbo...
def prepare_data(x, y, ftype, num_samples):
    """prepare FactorGraphFeatures and FactorGraphLabels"""
    from shogun import Factor, TableFactorType, FactorGraph
    from shogun import FactorGraphObservation, FactorGraphLabels, FactorGraphFeatures

    samples = FactorGraphFeatures(num_samples)
    labels = FactorGr...
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0
In Shogun, we implemented several batch solvers and online solvers. Let's first try to train the model using a batch solver. We choose the dual bundle method solver (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDualLibQPBMSOSVM.html">DualLibQPBMSOSVM</a>) [2], since in practice it is slightly fa...
from shogun import DualLibQPBMSOSVM
from shogun import BmrmStatistics
import pickle
import time

# create bundle method SOSVM; there are a few variants to choose from:
# BMRM, Proximal Point BMRM, Proximal Point P-BMRM, NCBM
# usually the default one, i.e. BMRM, is good enough
# lambda is set to 1e-2
bmrm = DualLibQPBMSOSVM(...
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0
In our case, we have 101 active cutting planes, which is much less than 4082, i.e. the number of parameters, so we can expect a good model from these statistics. Now we come to the online solvers. Unlike the cutting plane algorithm, which re-optimizes over all the previously added dual variables, an online solver will u...
from shogun import StochasticSOSVM

# the 3rd parameter is do_weighted_averaging; turning it on
# may achieve a faster convergence rate.
# the 4th parameter controls output of verbose training information
sgd = StochasticSOSVM(model, labels, True, True)
sgd.set_num_iter(100)
sgd.set_lambda(0.01)
...
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0
Inference Next, we show how to do inference with the learned model parameters for a given data point.
# get testing data
samples_ts, labels_ts = prepare_data(p_ts, l_ts, ftype_all, n_ts_samples)

from shogun import FactorGraphFeatures, FactorGraphObservation, TREE_MAX_PROD, MAPInference

# get a factor graph instance from test data
fg0 = samples_ts.get_sample(100)
fg0.compute_energies()
fg0.connect_components()

# crea...
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0
Evaluation In the end, we check average training error and average testing error. The evaluation can be done by two methods. We can either use the apply() function in the structured output machine or use the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSOSVMHelper.html">SOSVMHelper</a>.
from shogun import LabelsFactory, SOSVMHelper

# training error of BMRM method
bmrm.set_w(w_bmrm)
model.w_to_fparams(w_bmrm)
lbs_bmrm = bmrm.apply()
acc_loss = 0.0
ave_loss = 0.0
for i in xrange(n_tr_samples):  # xrange: this notebook targets Python 2
    y_pred = lbs_bmrm.get_label(i)
    y_truth = labels.get_label(i)
    acc_loss = acc_loss + model.delta_loss(y_truth...
doc/ipython-notebooks/structure/FGM.ipynb
cfjhallgren/shogun
gpl-3.0