### (1b) Pluralize and test

Let's use a `map()` transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace `<FILL IN>` with your solution. If you have trouble, the next cell has the solutio...
```python
# TODO: Replace <FILL IN> with appropriate code
def makePlural(word):
    """Adds an 's' to `word`.

    Note:
        This is a simple function that only adds an 's'. No attempt is made
        to follow proper pluralization rules.

    Args:
        word (str): A string.

    Returns:
        str: A string with 's' ...
```
Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb
dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark
mit
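For reference, a minimal completed version of the function — one possible solution, matching the behavior the lab's test expects:

```python
def makePlural(word):
    """Adds an 's' to `word` (no attempt at proper pluralization rules)."""
    return word + 's'

plural = makePlural('cat')
```

Since `map()` only needs a one-argument callable, any function with this signature (or an equivalent lambda, as in (1d)) works.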
### (1c) Apply makePlural to the base RDD

Now pass each item in the base RDD into a `map()` transformation that applies the `makePlural()` function to each element, and then call the `collect()` action to see the transformed RDD.
```python
# TODO: Replace <FILL IN> with appropriate code
pluralRDD = wordsRDD.map(makePlural)
print pluralRDD.collect()

# TEST Apply makePlural to the base RDD (1c)
Test.assertEquals(pluralRDD.collect(),
                  ['cats', 'elephants', 'rats', 'rats', 'cats'],
                  'incorrect values for pluralRDD')
```
### (1d) Pass a lambda function to map

Let's create the same RDD using a lambda function.
```python
# TODO: Replace <FILL IN> with appropriate code
pluralLambdaRDD = wordsRDD.map(lambda word: word + 's')
print pluralLambdaRDD.collect()

# TEST Pass a lambda function to map (1d)
Test.assertEquals(pluralLambdaRDD.collect(),
                  ['cats', 'elephants', 'rats', 'rats', 'cats'],
                  'incorrect values for pluralLam...
```
### (1e) Length of each word

Now use `map()` and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable.
```python
# TODO: Replace <FILL IN> with appropriate code
pluralLengths = (pluralRDD
                 .map(lambda word: len(word))
                 .collect())
print pluralLengths

# TEST Length of each word (1e)
Test.assertEquals(pluralLengths, [4, 9, 4, 4, 4],
                  'incorrect values for pluralLengths')
```
### (1f) Pair RDDs

The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v), where k is the key and v is the value. In this example, we will create a pair consisting of `('<word>', 1)` for each word element in th...
```python
# TODO: Replace <FILL IN> with appropriate code
wordPairs = wordsRDD.map(lambda word: (word, 1))
print wordPairs.collect()

# TEST Pair RDDs (1f)
Test.assertEquals(wordPairs.collect(),
                  [('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)],
                  'incorrect value for wordPairs')
```
## Part 2: Counting with pair RDDs

Now, let's count the number of times a particular word appears in the RDD. There are multiple ways to perform the counting, but some are much less efficient than others. A naive approach would be to `collect()` all of the elements and count them in the driver program. While this approach ...
```python
# TODO: Replace <FILL IN> with appropriate code
# Note that groupByKey requires no parameters
wordsGrouped = wordPairs.groupByKey()
for key, value in wordsGrouped.collect():
    print '{0}: {1}'.format(key, list(value))

# TEST groupByKey() approach (2a)
Test.assertEquals(sorted(wordsGrouped.mapValues(lambda x: list(x)...
```
### (2b) Use groupByKey() to obtain the counts

Using the `groupByKey()` transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator. Now sum the iterator using a `map()` transformation. The result should be a pair RDD consisting of (word, count) pairs.
```python
# TODO: Replace <FILL IN> with appropriate code
wordCountsGrouped = wordsGrouped.map(lambda (k, v): (k, sum(v)))
print wordCountsGrouped.collect()

# TEST Use groupByKey() to obtain the counts (2b)
Test.assertEquals(sorted(wordCountsGrouped.collect()),
                  [('cat', 2), ('elephant', 1), ('rat', 2)],
                  ...
```
### (2c) Counting using reduceByKey

A better approach is to start from the pair RDD and then use the `reduceByKey()` transformation to create a new pair RDD. The `reduceByKey()` transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of t...
```python
# TODO: Replace <FILL IN> with appropriate code
# Note that reduceByKey takes in a function that accepts two values and returns a single value
wordCounts = wordPairs.reduceByKey(lambda a, b: a + b)
print wordCounts.collect()

# TEST Counting using reduceByKey (2c)
Test.assertEquals(sorted(wordCounts.collect()), [('cat', ...
```
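To see what `reduceByKey()` computes, here is a plain-Python analogue (no Spark required) that folds values sharing a key together with the same two-argument function:

```python
def reduce_by_key(pairs, func):
    # fold all values that share a key with `func`, like RDD.reduceByKey
    out = {}
    for k, v in pairs:
        out[k] = func(out[k], v) if k in out else v
    return sorted(out.items())

word_pairs = [('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)]
counts = reduce_by_key(word_pairs, lambda a, b: a + b)
```

Unlike this single-process sketch, Spark's `reduceByKey()` combines partial results within each partition before shuffling, which is why it is cheaper than `groupByKey()` followed by a sum.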
### (2d) All together

The expert version of the code performs the `map()` to a pair RDD, the `reduceByKey()` transformation, and the `collect()` action in a single statement.
```python
# TODO: Replace <FILL IN> with appropriate code
wordCountsCollected = (wordsRDD
                       .map(lambda word: (word, 1))
                       .reduceByKey(lambda a, b: a + b)
                       .collect())
print wordCountsCollected

# TEST All together (2d)
Test.assertEquals(sorted(wordCountsCollected), [...
```
## Part 3: Finding unique words and a mean value

### (3a) Unique words

Calculate the number of unique words in wordsRDD. You can use other RDDs that you have already created to make this easier.
```python
# TODO: Replace <FILL IN> with appropriate code
uniqueWords = wordsRDD.map(lambda word: (word, 1)).distinct().count()
print uniqueWords

# TEST Unique words (3a)
Test.assertEquals(uniqueWords, 3, 'incorrect count of uniqueWords')
```
### (3b) Mean using reduce

Find the mean number of words per unique word in wordCounts. Use a `reduce()` action to sum the counts in wordCounts and then divide by the number of unique words. First `map()` the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values.
```python
# TODO: Replace <FILL IN> with appropriate code
from operator import add
totalCount = (wordCounts
              .map(lambda (a, b): b)
              .reduce(add))
average = totalCount / float(wordCounts.distinct().count())
print totalCount
print round(average, 2)

# TEST Mean using reduce (3b)
Test.assertEquals(round(a...
```
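The same computation in plain Python, assuming the (word, count) pairs from (2c) — `reduce(add, values)` is the local analogue of `.map(...).reduce(add)` on the RDD:

```python
from functools import reduce
from operator import add

word_counts = [('cat', 2), ('elephant', 1), ('rat', 2)]
total = reduce(add, [count for _, count in word_counts])
average = total / float(len(word_counts))
```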
## Part 4: Apply word count to a file

In this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real-world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data.

### (4a) wordCount function

First, ...
```python
# TODO: Replace <FILL IN> with appropriate code
def wordCount(wordListRDD):
    """Creates a pair RDD with word counts from an RDD of words.

    Args:
        wordListRDD (RDD of str): An RDD consisting of words.

    Returns:
        RDD of (str, int): An RDD consisting of (word, count) tuples.
    """
    return (wo...
```
### (4b) Capitalization and punctuation

Real-world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are: Words should be counted independent of their capitalization (e.g., Spark and spark should be counted as the same word). All punctuation should be remov...
```python
# TODO: Replace <FILL IN> with appropriate code
import re
def removePunctuation(text):
    """Removes punctuation, changes to lower case, and strips leading and trailing spaces.

    Note:
        Only spaces, letters, and numbers should be retained. Other characters
        should be eliminated (e.g. it's beco...
```
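One way to implement that specification with a regular expression — a sketch, since the exact whitespace handling the lab's autograder expects may differ slightly:

```python
import re

def remove_punctuation(text):
    # lower-case, drop everything except letters/digits/spaces, trim the ends
    return re.sub(r'[^a-z0-9 ]', '', text.lower()).strip()

cleaned = remove_punctuation(" The Elephant's 4 cats. ")
```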
### (4c) Load a text file

For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the `SparkContext.textFile()` method. We also apply the recently defined `removePunctuation()` function using a `map()` transformation to strip out the...
```python
# Just run this code
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')
fileName = os.path.join(baseDir, inputPath)

shakespeareRDD = (sc
                  .textFile(fileName, 8)
                  .map(removePunctuation))
print '\n'.join(shakespeareRDD ...
```
### (4d) Words from lines

Before we can use the `wordCount()` function, we have to address two issues with the format of the RDD: the first issue is that we need to split each line by its spaces; the second issue is that we need to filter out empty lines. Apply a transformation that will split each element of the RDD...
```python
# TODO: Replace <FILL IN> with appropriate code
shakespeareWordsRDD = shakespeareRDD.flatMap(lambda a: a.split(" "))
shakespeareWordCount = shakespeareWordsRDD.count()
print shakespeareWordsRDD.top(5)
print shakespeareWordCount

# TEST Words from lines (4d)
# This test allows for leading spaces to be removed either bef...
```
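`flatMap()` differs from `map()` in that each input element may produce zero or more output elements, which are concatenated. A plain-Python analogue makes both the difference and the empty-string problem visible:

```python
def flat_map(func, xs):
    # apply func to each element and concatenate the resulting sequences
    return [y for x in xs for y in func(x)]

lines = ['cat elephant', '', 'rat rat cat']
words = flat_map(lambda line: line.split(' '), lines)
```

Note that `''.split(' ')` yields `['']`, so empty lines leave empty-string "words" behind — exactly what step (4e) filters out.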
### (4e) Remove empty elements

The next step is to filter out the empty elements. Remove all entries where the word is ''.
```python
# TODO: Replace <FILL IN> with appropriate code
shakeWordsRDD = shakespeareWordsRDD.filter(lambda word: len(word) > 0)
shakeWordCount = shakeWordsRDD.count()
print shakeWordCount

# TEST Remove empty elements (4e)
Test.assertEquals(shakeWordCount, 882996, 'incorrect value for shakeWordCount')
```
### (4f) Count the words

We now have an RDD that contains only words. Next, let's apply the `wordCount()` function to produce a list of word counts. We can view the top 15 words by using the `takeOrdered()` action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of t...
```python
# TODO: Replace <FILL IN> with appropriate code
top15WordsAndCounts = wordCount(shakeWordsRDD).takeOrdered(15, lambda (a, b): -b)
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts))

# TEST Count the words (4f)
Test.assertEquals(top15WordsAndCounts, [(u'the', 27361), (u'an...
```
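`takeOrdered(n, key)` behaves like sorting by the key and taking the first n elements; negating the count makes the largest counts come first. In plain Python the `heapq` equivalent is:

```python
import heapq

word_counts = [('cat', 2), ('elephant', 1), ('rat', 2)]
# negate the count so the largest counts sort first, like takeOrdered(15, lambda (w, c): -c)
top2 = heapq.nsmallest(2, word_counts, key=lambda wc: -wc[1])
```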
Let's start by downloading the data:
```python
# Note: Linux bash commands start with a "!" inside those "ipython notebook" cells
DATA_PATH = "data/"

!pwd && ls
os.chdir(DATA_PATH)
!pwd && ls

!python download_dataset.py

!pwd && ls
os.chdir("..")
!pwd && ls

DATASET_PATH = DATA_PATH + "UCI HAR Dataset/"
print("\n" + "Dataset is now located at: " + DATASET_PATH)
```
LSTM.ipynb
guillaume-chevalier/LSTM-Human-Activity-Recognition
mit
Preparing dataset:
```python
TRAIN = "train/"
TEST = "test/"


# Load "X" (the neural network's training and testing inputs)
def load_X(X_signals_paths):
    X_signals = []

    for signal_type_path in X_signals_paths:
        file = open(signal_type_path, 'r')
        # Read dataset from disk, dealing with text files' syntax
        X_signal...
```
Additional Parameters: Here are some core parameter definitions for the training. For example, the whole neural network's structure could be summarised by enumerating those parameters, plus the fact that two LSTM cells are stacked one on top of the other (output-to-input) as hidden layers through the time steps.
```python
# Input Data
training_data_count = len(X_train)  # 7352 training series (with 50% overlap between each serie)
test_data_count = len(X_test)       # 2947 testing series
n_steps = len(X_train[0])           # 128 timesteps per series
n_input = len(X_train[0][0])        # 9 input parameters per timestep

# LSTM Neural Network's internal st...
```
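The activity labels feed a softmax over `n_classes` outputs, so at some point they must be one-hot encoded. A minimal numpy sketch (the function name `one_hot` is ours, not from the notebook):

```python
import numpy as np

def one_hot(labels, n_classes):
    # map 0-based integer class ids to rows of a one-hot matrix
    encoded = np.zeros((len(labels), n_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

y = one_hot(np.array([0, 2, 1]), n_classes=3)
```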
Utility functions for training:
```python
def LSTM_RNN(_X, _weights, _biases):
    # Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters.
    # Moreover, two LSTM cells are stacked which adds deepness to the neural network.
    # Note, some code of this notebook is inspired from a slightly different
    # RNN architectu...
```
Let's get serious and build the neural network:
```python
# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])),  # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))...
```
Hooray, now train the neural network:
```python
# To keep track of training's performance
test_losses = []
test_accuracies = []
train_losses = []
train_accuracies = []

# Launch the graph
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
init = tf.global_variables_initializer()
sess.run(init)

# Perform Training steps with "batch_size" a...
```
Training is good, but having visual insight is even better: Okay, let's plot this simply in the notebook for now.
```python
# (Inline plots: )
%matplotlib inline

font = {
    'family' : 'Bitstream Vera Sans',
    'weight' : 'bold',
    'size'   : 18
}
matplotlib.rc('font', **font)

width = 12
height = 12
plt.figure(figsize=(width, height))

indep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))
plt.plo...
```
And finally, the multi-class confusion matrix and metrics!
```python
# Results
predictions = one_hot_predictions.argmax(1)

print("Testing Accuracy: {}%".format(100*accuracy))
print("")
print("Precision: {}%".format(100*metrics.precision_score(y_test, predictions, average="weighted")))
print("Recall: {}%".format(100*metrics.recall_score(y_test, predictions, average="weighted")))
print...
```
Conclusion Outstandingly, the final accuracy is 91%! And it can peak to values such as 93.25% at some lucky moments during training, depending on how the neural network's weights were randomly initialized at the start. This means that the neural network is almost always able to correctly ide...
```python
# Let's convert this notebook to a README automatically for the GitHub project's title page:
!jupyter nbconvert --to markdown LSTM.ipynb
!mv LSTM.md README.md
```
### Gap robust Allan deviation comparison

Compute the GRADEV of a white phase noise. Compares two different scenarios: 1) the original data and 2) an ADEV estimate with the gap-robust ADEV.
```python
def example1():
    """
    Compute the GRADEV of a white phase noise.
    Compares two different scenarios.
    1) The original data and 2) ADEV estimate with gap robust ADEV.
    """
    N = 1000
    f = 1
    y = np.random.randn(1, N)[0, :]
    x = np.linspace(1, len(y), len(y))
    x_ax, y_ax, err_l, err_h, ns = allan.grad...
```
examples/gradev-demo.ipynb
telegraphic/allantools
gpl-3.0
### White phase noise

Compute the GRADEV of a nonstationary white phase noise.
```python
def example2():
    """
    Compute the GRADEV of a nonstationary white phase noise.
    """
    N = 1000  # number of samples
    f = 1     # data samples per second
    s = 1 + 5 / N * np.arange(0, N)
    y = s * np.random.randn(1, N)[0, :]
    x = np.linspace(1, len(y), len(y))
    x_ax, y_ax, err_l, err_h, ns = allan.gradev(y, f, x)
    plt...
```
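For intuition about what allantools computes here, the plain (non-gap-robust) overlapping Allan deviation for phase data `x` sampled at `rate` can be sketched in a few lines of numpy; allantools adds gap handling and error bars on top of this:

```python
import numpy as np

def oadev_phase(x, rate, m):
    # overlapping Allan deviation at averaging factor m, from phase data x
    tau = m / rate
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]  # overlapping second differences
    return np.sqrt(np.mean(d ** 2) / (2.0 * tau ** 2))

rng = np.random.default_rng(0)
phase = rng.standard_normal(1000)  # white phase noise
sigma1 = oadev_phase(phase, rate=1.0, m=1)
sigma10 = oadev_phase(phase, rate=1.0, m=10)
```

For white phase noise the deviation falls with increasing averaging time, which is what the demo's log-log plot shows.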
### Partial Dependence Plot

During the talk, Youtube: PyData - Random Forests Best Practices for the Business World, one of the best practices that the speaker mentioned when using tree-based models is to check for directional relationships. When using non-linear machine learning algorithms, such as popular tree-based mode...
```python
# we download the training data and store it
# under the `data` directory
data_dir = Path('data')
data_path = data_dir / 'train.csv'
data = pd.read_csv(data_path)
print('dimension: ', data.shape)
print('features: ', data.columns)
data.head()

# some naive feature engineering
data['Age'] = data['Age'].fillna(data['Age']...
```
model_selection/partial_dependence/partial_dependence.ipynb
ethen8181/machine-learning
mit
As mentioned above, tree-based models list the top important features, but it is not clear whether those features have a positive or negative impact on the result. This is where tools such as partial dependence plots can help us communicate the results better to others.
```python
from partial_dependence import PartialDependenceExplainer
plt.rcParams['figure.figsize'] = 16, 9

# we specify the feature name and its type to fit the partial dependence
# result, after fitting the result, we can call .plot to visualize it
# since this is a binary classification model, when we call the plot
# method,...
```
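Under the hood, one-way partial dependence is simple: sweep one feature over a grid, clamp it to each grid value for every row, and average the model's predictions. A minimal numpy sketch with a stand-in model (the quadratic `toy_model` is purely illustrative, not the Titanic classifier):

```python
import numpy as np

def partial_dependence(model, X, feature_idx, grid):
    # for each grid value, force the chosen feature for every row and
    # average the model's predictions over the whole dataset
    averaged = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value
        averaged.append(model(X_mod).mean())
    return np.array(averaged)

# stand-in for clf.predict_proba(...); purely illustrative
toy_model = lambda X: X[:, 0] ** 2 + X[:, 1]

X = np.random.default_rng(0).normal(size=(100, 2))
grid = np.linspace(-2.0, 2.0, 5)
pd_curve = partial_dependence(toy_model, X, feature_idx=0, grid=grid)
```

The recovered curve is U-shaped, mirroring the quadratic dependence on feature 0 — the directional relationship the speaker recommends checking.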
Hopefully, we can agree that the partial dependence plot makes intuitive sense: for the categorical feature Sex, 1 indicates that the passenger was male. And we know that in the Titanic accident the majority of the survivors were female passengers, thus the plot is telling us male passengers will on average ha...
```python
# centered = True is actually the default
pd_explainer.plot(centered=True, target_class=1)
plt.show()
```
We can perform the same process for numerical features such as Fare. We know that more people from the upper class survived, and people from the upper class generally paid a higher Fare to board the Titanic. The partial dependence plot below also depicts this trend.
```python
pd_explainer.fit(data, feature_name='Fare', feature_type='num')
pd_explainer.plot(target_class=1)
plt.show()
```
Solution. Clearly, we need the model $y=\theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3$.
```python
n = 9  # Sample size
k = 4  # Number of parameters
```
statistics/hw-13/hw-13.3.ipynb
eshlykov/mipt-day-after-day
unlicense
Consider the response.
```python
Y = numpy.array([3.9, 5.0, 5.7, 6.5, 7.1, 7.6, 7.8, 8.1, 8.4]).reshape(n, 1)
print(Y)
```
Consider the regressor.
```python
x = numpy.array([4.0, 5.2, 6.1, 7.0, 7.9, 8.6, 8.9, 9.5, 9.9])
X = numpy.ones((n, k))
X[:, 1] = x
X[:, 2] = x ** 2
X[:, 3] = x ** 3
print(X)
```
Let's use the classical least-squares formula $\hat{\Theta} = (X^\top X)^{-1} X^\top Y$ to obtain the estimate.
```python
Theta = inv(X.T @ X) @ X.T @ Y
print(Theta)
```
Let's plot the resulting function and overlay the sample points.
```python
x = numpy.linspace(3.5, 10.4, 1000)
y = Theta[0] + x * Theta[1] + x ** 2 * Theta[2] + x ** 3 * Theta[3]
matplotlib.pyplot.figure(figsize=(20, 8))
matplotlib.pyplot.plot(x, y, color='turquoise', label='Prediction', linewidth=2.5)
matplotlib.pyplot.scatter(X[:, 1], Y, s=40.0, label='Sample', color='blue', alpha=0.5)
...
```
<a id='loa'></a>
## 1. Loading and Inspection

Load the demo data
```python
dp = hs.load('./data/02/polymorphic_nanowire.hdf5')
dp
```
doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb
pycrystem/pycrystem
gpl-3.0
Set data type, scale intensity range and set calibration
```python
dp.data = dp.data.astype('float64')
dp.data *= 1 / dp.data.max()
```
Inspect metadata
```python
dp.metadata
```
Plot an interactive virtual image to inspect data
```python
roi = hs.roi.CircleROI(cx=72, cy=72, r_inner=0, r=2)
dp.plot_integrated_intensity(roi=roi, cmap='viridis')
```
<a id='pre'></a>
## 2. Pre-processing

Apply an affine transformation to correct for off-axis camera geometry
```python
scale_x = 0.995
scale_y = 1.031
offset_x = 0.631
offset_y = -0.351

dp.apply_affine_transformation(np.array([[scale_x, 0, offset_x],
                                         [0, scale_y, offset_y],
                                         [0, 0, 1]]))
```
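The 3x3 matrix acts on homogeneous detector coordinates: the diagonal rescales the axes while the last column shifts them. A toy numpy sketch of what it does to a single point (the point itself is illustrative):

```python
import numpy as np

scale_x, scale_y = 0.995, 1.031
offset_x, offset_y = 0.631, -0.351
A = np.array([[scale_x, 0.0, offset_x],
              [0.0, scale_y, offset_y],
              [0.0, 0.0, 1.0]])

point = np.array([10.0, 10.0, 1.0])  # (x, y, 1): homogeneous coordinates
x_new, y_new, _ = A @ point
```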
Perform difference-of-gaussians background subtraction with various parameters on one selected diffraction pattern, and plot to identify good parameters
```python
from pyxem.utils.expt_utils import investigate_dog_background_removal_interactive

dp_test_area = dp.inav[0, 0]

gauss_stddev_maxs = np.arange(2, 12, 0.2)  # min, max, step
gauss_stddev_mins = np.arange(1, 4, 0.2)   # min, max, step

investigate_dog_background_removal_interactive(dp_test_area,
                                               ...
```
Remove background using difference of gaussians method with parameters identified above
```python
dp = dp.subtract_diffraction_background('difference of gaussians',
                                        min_sigma=2,
                                        max_sigma=8,
                                        lazy_result=False)
```
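The difference-of-gaussians idea: a narrow blur keeps peaks plus background, a wide blur keeps mostly the slowly varying background, and their difference band-passes the peaks. A 1-D numpy sketch, independent of pyxem (kernel radius and test signal are illustrative):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def dog_filter(signal, min_sigma, max_sigma, radius=25):
    narrow = np.convolve(signal, gaussian_kernel(min_sigma, radius), mode='same')
    wide = np.convolve(signal, gaussian_kernel(max_sigma, radius), mode='same')
    return narrow - wide  # band-passed signal: slowly varying background removed

x = np.linspace(-10, 10, 201)
signal = np.exp(-x ** 2) + 0.5  # one peak sitting on a flat background
filtered = dog_filter(signal, min_sigma=2, max_sigma=8)
```

The peak survives at the centre while the flat background largely cancels.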
Perform further adjustments to the data ranges
```python
dp.data -= dp.data.min()
dp.data *= 1 / dp.data.max()
```
Set diffraction calibration and scan calibration
```python
dp = pxm.signals.ElectronDiffraction2D(dp)  # this is needed because of a bug in the code
dp.set_diffraction_calibration(diffraction_calibration)
dp.set_scan_calibration(10)
```
<a id='tem'></a>
## 3. Pattern Matching

Pattern matching generates a database of simulated diffraction patterns and then compares all simulated patterns against each experimental pattern to find the best match.

Import the generators required for simulation and indexation
```python
from diffsims.libraries.structure_library import StructureLibrary
from diffsims.generators.diffraction_generator import DiffractionGenerator
from diffsims.generators.library_generator import DiffractionLibraryGenerator
from diffsims.generators.zap_map_generator import get_rotation_from_z_to_direction
from diffsims.gen...
```
### 3.1. Define Library of Structures & Orientations

Define the crystal phases to be included in the simulated library
```python
structure_zb = diffpy.structure.loadStructure('./data/02/GaAs_mp-2534_conventional_standard.cif')
structure_wz = diffpy.structure.loadStructure('./data/02/GaAs_mp-8883_conventional_standard.cif')
```
Create a basic rotations list.
```python
za110c = get_rotation_from_z_to_direction(structure_zb, [1,1,0])
rot_list_cubic = get_grid_around_beam_direction(beam_rotation=za110c, resolution=1, angular_range=(0,180))

za110h = get_rotation_from_z_to_direction(structure_wz, [1,1,0])
rot_list_hex = get_grid_around_beam_direction(beam_rotation=za110h, resolution=1, ...
```
Construct a StructureLibrary defining crystal structures and orientations for which diffraction will be simulated
```python
struc_lib = StructureLibrary(['ZB','WZ'],
                             [structure_zb,structure_wz],
                             [rot_list_cubic,rot_list_hex])
```
<a id='temb'></a>
### 3.2. Simulate Diffraction for all Structures & Orientations

Define a diffsims DiffractionGenerator with diffraction simulation parameters
```python
diff_gen = DiffractionGenerator(accelerating_voltage=accelarating_voltage)
```
Initialize a diffsims DiffractionLibraryGenerator
```python
lib_gen = DiffractionLibraryGenerator(diff_gen)
```
Calculate the library of diffraction patterns for all phases and unique orientations
```python
target_pattern_dimension_pixels = dp.axes_manager.signal_shape[0]
half_size = target_pattern_dimension_pixels // 2
reciprocal_radius = diffraction_calibration*(half_size - 1)

diff_lib = lib_gen.get_diffraction_library(struc_lib,
                                           calibration=diffraction_calibration,
                                           ...
```
Optionally, save the library for later use.
```python
#diff_lib.pickle_library('./GaAs_cubic_hex.pickle')
```
If saved, the library can be loaded as follows
```python
#from diffsims.libraries.diffraction_library import load_DiffractionLibrary
#diff_lib = load_DiffractionLibrary('./GaAs_cubic_hex.pickle', safety=True)
```
<a id='temb'></a>
### 3.3. Pattern Matching Indexation

Initialize TemplateIndexationGenerator with the experimental data and diffraction library, and perform correlation, returning the n_largest matches with the highest correlation.

<div class="alert alert-block alert-warning"><b>Note:</b> This workflow has been changed from pr...
```python
indexer = TemplateIndexationGenerator(dp, diff_lib)
indexation_results = indexer.correlate(n_largest=3)
```
Check the solutions via plotting (can be slow, so not run by default)
```python
if False:
    indexation_results.plot_best_matching_results_on_signal(dp, diff_lib)
```
Get crystallographic map from indexation results
```python
crystal_map = indexation_results.to_crystal_map()
```
crystal_map is now a CrystalMap object, which comes from orix, see their documentation for details. Below we lift their code to plot a phase map
```python
from matplotlib import pyplot as plt
from orix import plot

fig, ax = plt.subplots(subplot_kw=dict(projection="plot_map"))
im = ax.plot_map(crystal_map)
```
<a id='vec'></a>
## 4. Vector Matching

<div class="alert alert-block alert-danger"><b>Note:</b> This workflow is less well developed than the template matching one, and may well be broken</div>

Vector matching generates a database of vector pairs (magnitudes and inter-vector angles) and then compares all theoretical value...
```python
from diffsims.generators.library_generator import VectorLibraryGenerator
from diffsims.libraries.structure_library import StructureLibrary
from diffsims.libraries.vector_library import load_VectorLibrary

from pyxem.generators.indexation_generator import VectorIndexationGenerator
from pyxem.generators.subpixelrefineme...
```
<a id='veca'></a>
### 4.1. Define Library of Structures

Define crystal structures for which to determine theoretical vector pairs
```python
structure_zb = diffpy.structure.loadStructure('./data/02/GaAs_mp-2534_conventional_standard.cif')
structure_wz = diffpy.structure.loadStructure('./data/02/GaAs_mp-8883_conventional_standard.cif')

structure_library = StructureLibrary(['ZB', 'WZ'],
                                     [structure_zb, structure_wz],
                                     ...
```
Initialize VectorLibraryGenerator with structures to be considered
```python
vlib_gen = VectorLibraryGenerator(structure_library)
```
Determine VectorLibrary with all vectors within given reciprocal radius
```python
reciprocal_radius = diffraction_calibration*(half_size - 1)/2
reciprocal_radius

vec_lib = vlib_gen.get_vector_library(reciprocal_radius)
```
Optionally, save the library for later use
```python
#vec_lib.pickle_library('./GaAs_cubic_hex_vectors.pickle')
#vec_lib = load_VectorLibrary('./GaAs_cubic_hex_vectors.pickle',safety=True)
```
### 4.2. Find Diffraction Peaks

Tune peak finding parameters interactively
```python
dp.find_peaks(interactive=False)
```
Perform peak finding on the data with parameters from above
```python
peaks = dp.find_peaks(method='difference_of_gaussian',
                      min_sigma=0.005,
                      max_sigma=5.0,
                      sigma_ratio=2.0,
                      threshold=0.06,
                      overlap=0.8,
                      interactive=False)
```
Coax the peaks back into a DiffractionVectors object
```python
peaks = DiffractionVectors(peaks).T
```
peaks now contain the 2D positions of the diffraction spots on the detector. The vector matching method works in 3D coordinates, which are found by projecting the detector positions back onto the Ewald sphere. Because the methods that follow are slow, we constrain ourselves to looking at a smaller subset of the data
```python
peaks = peaks.inav[:2,:2]
peaks.calculate_cartesian_coordinates?
peaks.calculate_cartesian_coordinates(accelerating_voltage=accelarating_voltage,
                                      camera_length=camera_length)
```
<a id='vecb'></a>
### 4.3. Vector Matching Indexation

Initialize VectorIndexationGenerator with the experimental data and vector library, and perform indexation using n_peaks_to_index, returning the n_best indexation results.

<div class="alert alert-block alert-danger"><b>Alert: This code no longer works on this example,...
```python
#indexation_generator = VectorIndexationGenerator(peaks, vec_lib)
#indexation_results = indexation_generator.index_vectors(mag_tol=3*diffraction_calibration,
#                                                        angle_tol=4,  # degree
#                                                        index_error_tol=0.2,
#                                                        ...
```
Refine all crystal orientations for improved phase reliability and orientation reliability maps.
```python
#refined_results = indexation_generator.refine_n_best_orientations(indexation_results,
#                                                                  accelarating_voltage=accelarating_voltage,
#                                                                  camera_length=camera_length,
#                                                                  ...
```
Get crystallographic map from optimized indexation results.
#crystal_map = refined_results.get_crystallographic_map()
doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb
pycrystem/pycrystem
gpl-3.0
See the object's documentation for further details
#crystal_map?
doc/demos/02 GaAs Nanowire - Phase Mapping - Orientation Mapping.ipynb
pycrystem/pycrystem
gpl-3.0
Exercise 1 Define a Python function called "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$.
# Function for setting up the Chebyshev derivative matrix def get_cheby_matrix(nx): cx = np.zeros(nx+1) x = np.zeros(nx+1) for ix in range(0,nx+1): x[ix] = np.cos(np.pi * ix / nx) cx[0] = 2. cx[nx] = 2. cx[1:nx] = 1. D = np.zeros((nx+1,nx+1)) for i in range(0, nx+1): ...
05_pseudospectral/cheby_derivative_solution.ipynb
davofis/computational_seismology
gpl-3.0
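A complete version of the exercise can use the negative-sum trick for the diagonal — each row of $D$ must annihilate constants, so the diagonal entry is minus the sum of the off-diagonal entries in its row — instead of the closed-form diagonal expressions in the loop above:

```python
import numpy as np

def get_cheby_matrix(nx):
    """Chebyshev differentiation matrix on the points x_j = cos(j*pi/nx)."""
    j = np.arange(nx + 1)
    x = np.cos(np.pi * j / nx)              # Chebyshev collocation points
    c = np.ones(nx + 1)
    c[0] = c[nx] = 2.0                      # endpoint weights
    # off-diagonal: D_ij = (c_i / c_j) * (-1)^(i+j) / (x_i - x_j)
    X = x[:, None] - x[None, :] + np.eye(nx + 1)   # dummy 1 on the diagonal
    D = (c[:, None] / c[None, :]) * (-1.0) ** (j[:, None] + j[None, :]) / X
    # diagonal: force each row to sum to zero (derivative of a constant is 0)
    D -= np.diag(D.sum(axis=1))
    return D
```

The matrix is exact (to roundoff) for polynomials up to degree nx, which makes it easy to verify.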
Exercise 2 Calculate the numerical derivative by applying the differentiation matrix $D_{ij}$. Define an arbitrary function (e.g. a Gaussian) and initialize its analytical derivative on the Chebyshev collocation points. Calculate the numerical derivative and the difference to the analytical solution. Vary the wavenumbe...
# Initialize arbitrary test function on Chebyshev collocation points nx = 200 # Number of grid points x = np.zeros(nx+1) for ix in range(0,nx+1): x[ix] = np.cos(ix * np.pi / nx) dxmin = min(abs(np.diff(x))) dxmax = max(abs(np.diff(x))) # Function example: Gaussian # Width of Gaussian s = .2 # Gaussian functi...
05_pseudospectral/cheby_derivative_solution.ipynb
davofis/computational_seismology
gpl-3.0
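The comparison above can be condensed into a self-contained check: build $D$, apply it to the Gaussian sampled at the collocation points, and measure the maximum deviation from the analytical derivative. Because the scheme is spectral, the error for a well-resolved function sits near machine precision (grid size and width below are illustrative choices):

```python
import numpy as np

nx = 128
j = np.arange(nx + 1)
x = np.cos(np.pi * j / nx)                  # Chebyshev collocation points
c = np.ones(nx + 1); c[0] = c[-1] = 2.0
X = x[:, None] - x[None, :] + np.eye(nx + 1)
D = (c[:, None] / c[None, :]) * (-1.0) ** (j[:, None] + j[None, :]) / X
D -= np.diag(D.sum(axis=1))                 # rows sum to zero

s = 0.2                                     # width of the Gaussian
f = np.exp(-x**2 / s**2)                    # test function
df_ana = -2.0 * x / s**2 * f                # analytical derivative
df_num = D @ f                              # spectral derivative
err = np.max(np.abs(df_num - df_ana))
```

Shrinking `s` (a higher effective wavenumber) or lowering `nx` drives `err` up sharply once the Gaussian is no longer resolved on the grid, which is the behaviour the exercise asks you to explore.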
Exercise 3 Now that the numerical derivative is available, we can visually inspect our results. Make a plot of both the analytical and numerical derivatives, together with the difference (error).
# Plot analytical and numerical derivatives # --------------------------------------------------------------- plt.subplot(2,1,1) plt.plot(x, f, "g", lw = 1.5, label='Gaussian') plt.legend(loc='upper right', shadow=True) plt.xlabel('$x$') plt.ylabel('$f(x)$') plt.subplot(2,1,2) plt.plot(x, df_ana, "b", lw = 1....
05_pseudospectral/cheby_derivative_solution.ipynb
davofis/computational_seismology
gpl-3.0
Document Authors Set document authors
# Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
Document Contributors Specify document contributors
# Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
Document Publication Specify document publication status
# Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conse...
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter val...
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # ...
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Othe...
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please speci...
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
4. Key Properties --&gt; Nonoceanic Waters Non-oceanic waters treatment in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas are treated
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s).
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccr-iitm/cmip6/models/sandbox-1/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0