markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
Exercise 8 8.1. What is the scriptSig from the second input in this tx? 8.2. What is the scriptPubKey and amount of the first output in this tx? 8.3. What is the amount for the second output?```010000000456919960ac691763688d3d3bcea9ad6ecaf875df5339e148a1fc61c6ed7a069e010000006a47304402204585bcdef85e6b1c6af5c2669d4830f... | # Exercise 8.1/8.2/8.3
from io import BytesIO
from tx import Tx
hex_transaction = '010000000456919960ac691763688d3d3bcea9ad6ecaf875df5339e148a1fc61c6ed7a069e010000006a47304402204585bcdef85e6b1c6af5c2669d4830ff86e42dd205c0e089bc2a821657e951c002201024a10366077f87d6bce1f7100ad8cfa8a064b39d4e8fe4ea13a7b71aa8180f012102f0d... | _____no_output_____ | BSD-2-Clause | session3/session3.ipynb | casey-bowman/pb-exercises |
Sequences of Multi-labelled dataEarlier we have examined the notion that documents can be thought of as a sequence of tokens along with a mapping from a set of labels to these tokens. Ideas like stemming and lemmatization are linguistic methods for applying different label mappings to these token sequences. An example... | from vectorizers import TokenCooccurrenceVectorizer
from vectorizers.utils import flatten
import pandas as pd
import numpy as np
import umap
import umap.plot | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
We'll add some bokeh imports for easy interactive plots | import bokeh.io
bokeh.io.output_notebook() | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
Let's fetch some dataThe OpTC data is a bit difficult to pull and parse into easy-to-process formats. I will leave that as an exercise to the reader. A colleague of ours has pulled this data and restructured it into parquet files distributed across a nice file structure but that is outside of the scope of this note... | flows_onehost_oneday = pd.read_csv("optc_flows_onehost_oneday.csv")
flows_onehost_oneday.shape
flows_onehost_oneday.columns | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
You'll notice that we have just over a million events that are being described by a wide variety of descriptive columns. Since we've limited our data to network flow data, many of these columns aren't populated for this particular data set. For a more detailed description of this data and these fields I point a reader ... | flow_variables = ['image_path', 'src_ip', 'src_port','dest_ip', 'dest_port', 'l4protocol']
categorical_variables = flow_variables
sort_by = ['timestamp'] | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
Restructure our dataNow we need to restructure this data into a format for easy consumption via our TokenCooccurrenceVectorizer.We will convert each row of our data_frame into a sequence of multi-labelled events. To do that we'll need to convert from a list of categorical column values into a list of labels. An ea... | flows_sorted = flows_onehost_oneday.sort_values(by = 'timestamp') | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
Now we limit ourselves to the columns of interest for these particular events. | flows_df = flows_sorted[flows_sorted.columns.intersection(categorical_variables)]
flows_df.shape
flows_df.head(3) | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
Now we'll quickly iterate through this dataframe and into our list of lists format. | def categorical_columns_to_list(data_frame, column_names):
"""
Takes a data frame and a set of columns and represents each row a list of the appropriate non-empty columns
of the form column_name:value.
"""
label_list = pd.Series([[f'{k}:{v}' for k, v in zip(column_names, t) if v is not None]
fo... | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
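The helper above is truncated in this dump; a self-contained sketch of the same row-to-label-list idea, using hypothetical column names and toy data, might look like:

```python
import pandas as pd

def rows_to_label_lists(df, columns):
    # Represent each row as a list of "column:value" labels, skipping missing values.
    return pd.Series(
        [[f"{c}:{v}" for c, v in zip(columns, row) if pd.notna(v)]
         for row in df[columns].itertuples(index=False)]
    )

# Hypothetical flow-like data for illustration only.
flows = pd.DataFrame({
    "src_ip": ["10.0.0.1", "10.0.0.2"],
    "dest_port": [443, None],
})
labels = rows_to_label_lists(flows, ["src_ip", "dest_port"])
```

Rows with missing values simply yield shorter label lists, which is what a cooccurrence vectorizer over multi-labelled events expects.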
TokenCooccurrenceVectorizerWe initially only embed labels that occur at least 20 times within our day's events. This prevents us from attempting to embed labels that we have very limited data for. We will initially select a window_radii=2 in order to include some very limited sequence information. The presumption her... | word_vectorizer = TokenCooccurrenceVectorizer(
min_occurrences= 20,
window_radii=2,
multi_labelled_tokens=True).fit(flow_labels)
word_vectors = word_vectorizer.reduce_dimension()
print(f"This constructs an embedding of {word_vectorizer.cooccurrences_.shape[0]} labels represented by their",
f"cooccurr... | This constructs an embedding of 15014 labels represented by their cooccurrence with 30028 labels occurring before and after them.
We have then reduced this space to a 150 dimensional representation.
| BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
For the purposes of visualization we will use our UMAP algorithm to embed this data into two dimensional space. | model = umap.UMAP(n_neighbors=30, metric='cosine', unique=True, random_state=42).fit(word_vectors)
hover_df = pd.DataFrame({'label':word_vectorizer.token_label_dictionary_.keys()})
event_type = hover_df.label.str.split(':', n=1, expand=True)
event_type.columns = ['type','value']
umap.plot.points(model, theme='fire', la... | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
A little exploration of this space quickly reveals that our label space is overwhelmed by source ports (and some destination ports) with values in the range 40,000 to 60,000. A little consultation with subject matter experts quickly reveals that these are so-called ephemeral ports. That is a pre-established range of p... | word_vectorizer = TokenCooccurrenceVectorizer(
min_occurrences= 20,
window_radii=2,
excluded_token_regex='(src\_port|dest\_port):[4-6][0-9]{4}',
multi_labelled_tokens=True).fit(flow_labels)
word_vectors = word_vectorizer.reduce_dimension()
print(f"This constructs an embedding of {word_vectorizer.cooccu... | This constructs an embedding of 3245 labels represented by their cooccurrence with 6490 labels occurring before and after them.
We have then reduced this space to a 150 dimensional representation.
| BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
As before, we'll reduce this 150-dimensional representation to a two-dimensional representation for visualization and exploration. Since we are already using subject matter knowledge to enrich our analysis, we will continue in this vein and label our IP addresses with whether they are internal or external addresses. In... | model = umap.UMAP(n_neighbors=30, metric='cosine', unique=True, random_state=42).fit(word_vectors)
hover_df = pd.DataFrame({'label':word_vectorizer.token_label_dictionary_.keys()})
internal_bool = hover_df.label.str.contains("ip:10\.")
event_type = hover_df.label.str.split(':', n=1, expand=True)
event_type.columns = ['... | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
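The `str.contains("ip:10\.")` check above only catches the 10.0.0.0/8 block; Python's standard `ipaddress` module covers all RFC 1918 private ranges. A minimal sketch (the label format is the "column:value" convention from earlier; `is_internal` is a hypothetical helper):

```python
import ipaddress

def is_internal(label):
    # Labels look like "src_ip:10.1.2.3"; treat private addresses as internal.
    address = label.split(":", 1)[1]
    return ipaddress.ip_address(address).is_private

checks = [is_internal(l)
          for l in ["src_ip:10.1.2.3", "src_ip:8.8.8.8", "dest_ip:192.168.0.5"]]
```

Note that `is_private` also flags loopback and link-local addresses, which may or may not be desirable depending on the network in question.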
This provides a nice structure over our token label space. We see interesting mixtures of internal and external source IP spaces, with connections making use of specific source and destination ports separating off nicely into their own clusters.The next step would be to look at your data by building an interact... | p = umap.plot.interactive(model, theme='fire', labels=event_type['type'], hover_data=hover_df, point_size=3, width=600, height=600)
umap.plot.show(p) | _____no_output_____ | BSD-3-Clause | doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb | jmconroy/vectorizers |
Numpy" NumPy is the fundamental package for scientific computing with Python. It contains among other things:* a powerful N-dimensional array object* sophisticated (broadcasting) functions* useful linear algebra, Fourier transform, and random number capabilities "-- From the [NumPy](http://www.numpy.org/) landing pag... | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
| _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Random numbers in numpy | np.random.random((3, 2)) # Array of shape (3, 2), entries uniform in [0, 1). | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Note that (as usual in computing) numpy produces pseudo-random numbers based on a seed, or more precisely a random state. In order to make random sequences and calculations based on them reproducible, use* the [`np.random.seed()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.seed.html) function to set t... | np.random.seed(0)
print(np.random.random(2))
# Reset the global random state to the same state.
np.random.seed(0)
print(np.random.random(2)) | [0.5488135 0.71518937]
[0.5488135 0.71518937]
| MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
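NumPy also offers an explicit random-state object that avoids relying on the global seed entirely; a minimal sketch using the Generator API (`np.random.default_rng`):

```python
import numpy as np

# Two generators created from the same seed produce identical streams,
# without touching the global np.random state.
rng_a = np.random.default_rng(0)
rng_b = np.random.default_rng(0)
draw_a = rng_a.random(2)
draw_b = rng_b.random(2)
```

Passing generators around explicitly makes reproducibility local to each computation rather than a global side effect.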
Numpy Array Operations 1There are a large number of operations you can run on any numpy array. Here we showcase some common ones. | # Create one from hard-coded data:
ar = np.array([
[0.0, 0.2],
[0.9, 0.5],
[0.3, 0.7],
], dtype=np.float64) # float64 is the default.
print('The array:\n', ar)
print()
print('data type', ar.dtype)
print('transpose\n', ar.T)
print('shape', ar.shape)
print('reshaping an array', ar.reshape((6)))
| The array:
[[ 0. 0.2]
[ 0.9 0.5]
[ 0.3 0.7]]
data type float64
transpose
[[ 0. 0.9 0.3]
[ 0.2 0.5 0.7]]
shape (3, 2)
reshaping an array [ 0. 0.2 0.9 0.5 0.3 0.7]
| MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Many numpy operations are available both as np module functions as well as array methods. For example, we can also reshape as | print('reshape v2', np.reshape(ar, (6, 1))) | reshape v2 [[ 0. ]
[ 0.2]
[ 0.9]
[ 0.5]
[ 0.3]
[ 0.7]]
| MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Numpy Indexing and selectorsHere are some basic indexing examples from numpy. | ar
ar[0, 1] # row, column
ar[:, 1] # slices: select all elements across the first (0th) axis.
ar[1:2, 1] # slices with syntax from:to, selecting [from, to).
ar[1:, 1] # Omit `to` to go all the way to the end
ar[:2, 1] # Omit `from` to start from the beginning
ar[0:-1, 1] # Use negative indexing to count elements ... | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
We can also pass boolean arrays as indices. These will exactly define which elements to select. | ar[np.array([
[True, False],
[False, True],
[True, False],
])] | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Boolean arrays can be created with logical operations, then used as selectors. Logical operators apply elementwise. | ar_2 = np.array([ # Nearly the same as ar
[0.0, 0.1],
[0.9, 0.5],
[0.0, 0.7],
])
# Where ar_2 is smaller than ar, let ar_2 be -inf.
ar_2[ar_2 < ar] = -np.inf
ar_2 | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Numpy Operations 2 | print('array:\n', ar)
print()
print('sum across axis 0 (rows):', ar.sum(axis=0))
print('mean', ar.mean())
print('min', ar.min())
print('row-wise min', ar.min(axis=1))
| array:
[[ 0. 0.2]
[ 0.9 0.5]
[ 0.3 0.7]]
sum across axis 0 (rows): [ 1.2 1.4]
mean 0.433333333333
min 0.0
row-wise min [ 0. 0.5 0.3]
| MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
We can also take element-wise minimums between two arrays.We may want to do this when "clipping" values in a matrix, that is, setting any values larger than, say, 0.6, to 0.6. We would do this in numpy with... Broadcasting (and selectors) | np.minimum(ar, 0.6) | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Numpy automatically turns the scalar 0.6 into an array the same size as `ar` in order to take element-wise minimum. Broadcasting can save us a lot of typing, but in complicated cases it may require a good understanding of the exact rules followed.Some references:* [Numpy page that explains broadcasting](https://docs.sc... | # Centering our array.
print('centered array:\n', ar - np.mean(ar)) | centered array:
[[-0.43333333 -0.23333333]
[ 0.46666667 0.06666667]
[-0.13333333 0.26666667]]
| MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Note that `np.mean()` was a scalar, but it is automatically subtracted from every element. We can write the minimum function ourselves, as well. | clipped_ar = ar.copy() # So that ar is not modified.
clipped_ar[clipped_ar > 0.6] = 0.6
clipped_ar | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
A few things happened here:1. 0.6 was broadcast in for the greater than (>) operation2. The greater than operation defined a selector, selecting a subset of the elements of the array3. 0.6 was broadcast to the right number of elements for assignment. Vectors may also be broadcast into matrices. | vec = np.array([1, 2])
ar + vec | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
Here the shapes of the involved arrays are:```ar (2d array): 2 x 2vec (1d array): 2Result (2d array): 2 x 2```When either of the dimensions compared is one (even implicitly, like in the case of `vec`), the other is used. In other words, dimensions with size 1 are stretched or “copied” to match the other.H... | #ar + np.array([[1, 2, 3]]) | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
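The shape rule quoted above can be checked programmatically; `np.broadcast_shapes` (available since NumPy 1.20) applies exactly these rules without allocating any arrays:

```python
import numpy as np

# Compatible: the trailing dimension matches, so (3, 2) with (2,) broadcasts to (3, 2).
compatible = np.broadcast_shapes((3, 2), (2,))

# Incompatible: 2 vs 3 in the last dimension raises a ValueError.
try:
    np.broadcast_shapes((3, 2), (3,))
    mismatch = None
except ValueError as err:
    mismatch = err
```

This is a cheap way to sanity-check a broadcast before running it on large arrays.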
ExerciseBroadcast and add the vector `[10, 20, 30]` across the columns of `ar`. You should get ```array([[10. , 10.2], [20.9, 20.5], [30.3, 30.7]]) ``` | #@title Code
# Recall that you can use vec.shape to verify that your array has the
# shape you expect.
### Your code here ###
#@title Solution
vec = np.array([[10], [20], [30]])
ar + vec | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
`np.newaxis`We can use another numpy feature, `np.newaxis` to simply form the column vector that was required for the example above. It adds a singleton dimension to arrays at the desired location: | vec = np.array([1, 2])
vec.shape
vec[np.newaxis, :].shape
vec[:, np.newaxis].shape | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
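`np.newaxis` is equivalent to a reshape that inserts a length-1 axis; a quick check of both spellings:

```python
import numpy as np

vec = np.array([1, 2])
col = vec[:, np.newaxis]   # shape (2, 1), a column vector
row = vec[np.newaxis, :]   # shape (1, 2), a row vector
```

Either form then broadcasts against a 2-d array along the stretched axis.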
Now you know more than enough to generate some example data for our `NXOR` function. Exercise: Generate Data for NXORWrite a function `get_data(num_examples)` that returns two numpy arrays* `inputs` of shape `num_examples x 2` with points selected uniformly from the $[-1, 1]^2$ domain.* `labels` of shape `num_examples... | #@title Code
def get_data(num_examples):
# Replace with your code.
return np.zeros((num_examples, 2)), np.zeros((num_examples))
#@title Solution
# Solution 1.
def get_data(num_examples):
inputs = 2*np.random.random((num_examples, 2)) - 1
labels = np.prod(inputs, axis=1)
labels[labels <= 0] = -1
labels[l... | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions |
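The solution cell is truncated in this dump; an untruncated sketch of the same approach, with a quick sanity check, might read:

```python
import numpy as np

def get_data(num_examples):
    # Points uniform in [-1, 1]^2; label is +1 when the coordinates share a sign
    # (the NXOR of their signs), -1 otherwise.
    inputs = 2 * np.random.random((num_examples, 2)) - 1
    labels = np.where(np.prod(inputs, axis=1) > 0, 1.0, -1.0)
    return inputs, labels

inputs, labels = get_data(1000)
```

`np.where` replaces the two in-place selector assignments from the original solution with a single vectorized step; both produce the same labels.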
That's all, folks!For now. | _____no_output_____ | MIT | introductory/Intro_Numpy.ipynb | lmartak/PracticalSessions | |
Stack Overflow Developer Surveys, 2015-2019 | # global printing options
pd.options.display.max_columns = 100
pd.options.display.max_rows = 30 | _____no_output_____ | CNRI-Python | .ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb | Serenitea/CRISP_DM-StackOverflow-Survey |
Questions explored:1. Which are currently the most commonly used programming languages?2. How has the prevalence of different programming languages changed throughout the past five years?3. Which programming languages are currently the most popular for specific types of developers?Poss questions:- mode of edu... | # import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import timeit
%matplotlib inline
df2019 = pd.read_csv('./2019survey_results_public.csv', header = 0, skipinitialspace= True, low_memory=False)
df2018 = pd.read_csv('./2018survey_r... | _____no_output_____ | CNRI-Python | .ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb | Serenitea/CRISP_DM-StackOverflow-Survey |
--- 2015 Dataset--- | #slicing the desired columns about Current Lang & Tech from the rest of the 2015 df
#modify new df column names to match list
df2015_mix = df2015.loc[:,'Current Lang & Tech: Android':'Current Lang & Tech: Write-In']
df2015_mix.columns = df2015_mix.columns.str.replace('Current Lang & Tech: ', '')
#df2015_mix.columns = d... | _____no_output_____ | CNRI-Python | .ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb | Serenitea/CRISP_DM-StackOverflow-Survey |
--- 2016 Dataset--- | def process_2016_pt1(df, col):
df_droppedna, df_droppedna_len, df_droppedna_split, df_count, df_vals = process_col(df, col)
df_new, df_new_transposed = s_of_lists_to_df(df_droppedna_split)
return df_new, df_new_transposed, df_vals, df_droppedna_len
dftech2016_new, dftech2016_new_tp, tech2016_vals, tech... | _____no_output_____ | CNRI-Python | .ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb | Serenitea/CRISP_DM-StackOverflow-Survey |
--- 2017 Dataset--- | df_droppedna, df_droppedna_split, df_count, df_vals = process_col(df, col)
def process_data_extended(df, col):
'''
for years 2017-2019.
processes a specified column from the raw imported dataframe
'''
s = df[col]
s = s.dropna()
df_len = s.shape[0]
df_count, df_vals = eval_complex_col(df,... | _____no_output_____ | CNRI-Python | .ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb | Serenitea/CRISP_DM-StackOverflow-Survey |
--- 2018 Dataset--- | dflang2018_bool, lang2018_vals, lang2018_len = process_data(df2018, 'LanguageWorkedWith')
lang2018_vals
dflang2018_bool['Visual Basic / VBA'] = (dflang2018_bool['VB.NET'] |
dflang2018_bool['VBA'] |
dflang2018_bool['Visual Basic 6'])
dflang2018_bool = dflang2018_bool.drop(['VB.NET', 'VBA', 'Visual Basic 6'], axis = ... | _____no_output_____ | CNRI-Python | .ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb | Serenitea/CRISP_DM-StackOverflow-Survey |
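The `process_data` helper that builds these boolean frames isn't shown in this dump. For semicolon-delimited multi-select survey columns like `LanguageWorkedWith`, pandas' `str.get_dummies` is one minimal way to construct such an indicator frame (hypothetical answers shown):

```python
import pandas as pd

answers = pd.Series([
    "Python;SQL",
    "JavaScript;Python",
    None,                 # unanswered rows drop out after dropna
])
lang_bool = answers.dropna().str.get_dummies(sep=";").astype(bool)
```

The resulting columns are the sorted unique languages, one boolean column per language, which is exactly the shape needed for per-language counts or merges with other respondent attributes.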
--- 2019 Dataset--- | dflang2019_bool, lang2019_vals, lang2019_len = process_data(df2019, 'LanguageWorkedWith')
lang2019_vals
dflang2019_bool = dflang2019_bool.rename(columns = {"VBA": "Visual Basic / VBA", "Other(s):": "Other"})
other_lang2019_list = find_other_lang(dflang2019_bool)
other_lang2019_list = sorted(other_lang2019_list)
other_l... | _____no_output_____ | CNRI-Python | .ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb | Serenitea/CRISP_DM-StackOverflow-Survey |
RadarCOVID-Report Data Extraction | import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.g... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Constants | from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Parameters | environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download ... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
COVID-19 Cases | report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/mast... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Extract API TEKs | raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_err... | /opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-c... | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Dump API TEKs | tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sa... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Load TEK Dumps | import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_pat... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Daily New TEKs | tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.c... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Hourly New TEKs | hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_t... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Official Statistics | import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_st... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
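`pandas.io.json.json_normalize`, used above, has since been deprecated in favor of the top-level `pd.json_normalize`; a minimal sketch with a hypothetical payload (the real endpoint's field names are not shown here):

```python
import pandas as pd

payload = {"lastUpdate": "2021-06-29", "kpis": {"downloads": 100, "positives": 5}}
stats = pd.json_normalize(payload)  # nested keys become dotted column names
```

Nested dictionaries are flattened into columns like `kpis.downloads`, giving a one-row frame per JSON object.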
Data Merge | result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_su... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Report Results | display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Bac... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Daily Summary Table | result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Daily Summary Plots | result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"... | /opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning:
The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.
layout[ax.rowNum, ax.colNum] =... | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Daily Generation to Upload Period Table | display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mappin... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Hourly Summary Plots | hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_fi... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Publish Results | github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 el... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Save Results | report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Ta... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Publish Results as JSON | def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="re... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Publish on README | with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Publish on Twitter | enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
t... | _____no_output_____ | Apache-2.0 | Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb | pvieito/Radar-STATS |
Exercise 1.10I have a fair coin and a two-headed coin. I choose one of the two coins randomly with equal probability and flip it. Given that the flip was heads, what is the probability that I flipped the two-headed coin?**Solution:**Let $F$ denote picking the fair coin and $T$ picking the two-headed coin, respectively... | num_samples = 100000
chosen_coin = np.random.randint(low=0, high=2, size=num_samples) # 0 = fair, 1 = two-headed
heads = np.random.randint(low=0, high=2, size=num_samples) + chosen_coin > 0
(chosen_coin * heads).sum() / heads.sum() | _____no_output_____ | MIT | chapter01/exercise10.ipynb | soerenberg/probability-and-computing-exercises |
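The simulation above should land near the analytic answer. By Bayes' rule, $P(T \mid H) = \frac{P(H \mid T)P(T)}{P(H \mid T)P(T) + P(H \mid F)P(F)} = \frac{1 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + \tfrac{1}{2} \cdot \tfrac{1}{2}} = \tfrac{2}{3}$, which can be computed exactly with fractions:

```python
from fractions import Fraction

p_two_headed = Fraction(1, 2)        # prior on picking the two-headed coin
p_heads_given_two = Fraction(1, 1)   # the two-headed coin always lands heads
p_heads_given_fair = Fraction(1, 2)  # the fair coin lands heads half the time

posterior = (p_heads_given_two * p_two_headed) / (
    p_heads_given_two * p_two_headed + p_heads_given_fair * Fraction(1, 2))
```

The Monte Carlo estimate from the cell above converges to this 2/3 as the sample count grows.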
Project: Tweets Data Analysis Table of ContentsIntroduction Data Wrangling Data Gathering Data Assessing Data Cleaning Exploratory Data AnalysisConclusions Introduction> wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The Twitter archive is gr... | # import the packages will be used through the project
import numpy as np
import pandas as pd
# for twitter API
import tweepy
from tweepy import OAuthHandler
import json
from timeit import default_timer as timer
import requests
import tweepy
import json
import os
import re
# for Exploratory Data Analysis visually
i... | _____no_output_____ | MIT | Wrangle-act.ipynb | AbdElrhman-m/Wrangle-and-Analyze-Data |
Data Wrangling 1- Gathering Data (A) gathering twitter archivement dog rates Data from the provided csv file | # Load your data and print out a few lines. Perform operations to inspect data
# types and look for instances of missing or possibly errant data.
twitter_archive = pd.read_csv('twitter-archive-enhanced.csv')
twitter_archive.head(5) | _____no_output_____ | MIT | Wrangle-act.ipynb | AbdElrhman-m/Wrangle-and-Analyze-Data |
(B) Getting data from the file (image_predictions.tsv) which is hosted on Udacity's servers and should be downloaded programmatically using the Requests library a | # Scrape the image predictions file from the Udacity website
url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'
response = requests.get(url)
with open(os.path.join('image_predictions.tsv'), mode = 'wb') as file:
file.write(response.content)
# Load the ... | _____no_output_____ | MIT | Wrangle-act.ipynb | AbdElrhman-m/Wrangle-and-Analyze-Data |
(C) Getting data from twitter API | # Query Twitter API for each tweet in the Twitter archive and save JSON in a text file
# These are hidden to comply with Twitter's API terms and conditions
consumer_key = 'HIDDEN'
consumer_secret = 'HIDDEN'
access_token = 'HIDDEN'
access_secret = 'HIDDEN'
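The cell that reads the saved `tweet_json.txt` back into the `tweets_data` dataframe is not shown above. Here is a hedged sketch of that parsing step: the JSON field names (`id`, `favorite_count`, `user.followers_count`) are assumptions based on Twitter's API schema, and the two sample lines are fabricated stand-ins for the real file.

```python
import json
import pandas as pd

# two fabricated sample lines standing in for tweet_json.txt (structure assumed)
sample_lines = [
    json.dumps({'id': 892420643555336193, 'favorite_count': 39467,
                'user': {'followers_count': 3200889}}),
    json.dumps({'id': 892177421306343426, 'favorite_count': 33819,
                'user': {'followers_count': 3200889}}),
]

records = []
for line in sample_lines:  # a real run would iterate over open('tweet_json.txt')
    tweet = json.loads(line)
    records.append({'tweet_id': str(tweet['id']),
                    'favorite_count': tweet['favorite_count'],
                    'followers_count': tweet['user']['followers_count']})

tweets_data = pd.DataFrame(records)
print(tweets_data.shape)  # (2, 3)
```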
### 2- Assessing Data

#### (A) Visual Assessing

# Display the twitter_archive table
twitter_archive.head()
twitter_archive
twitter_archive[twitter_archive['expanded_urls'].isnull() == False]
twitter_archive['text'][1]
twitter_archive['rating_denominator'].value_counts()
twitter_archive.nunique()
twitter_archive.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2356 entries, 0 to 2355
Data columns (total 17 columns):
tweet_id 2356 non-null int64
in_reply_to_status_id 78 non-null float64
in_reply_to_user_id 78 non-null float64
timestamp 2356 non-null object
source ...
The columns of the twitter_archive dataframe:
> * *tweet_id* => the unique identifier for each tweet
> * *in_reply_to_status_id* => the id of the replied-to tweet
> * *in_reply_to_user_id* => the id of the replied-to user
> * *timestamp* => the tweet post time
> * *source* => the url of the twitter ...

images.head()
images.tail()
images.nunique()
The columns of the images dataframe I'll use in my analysis:
> * tweet_id ==> tweet id
> * jpg_url ==> image link
> * p1 ==> the predicted breed
> * p1_conf ==> the probability of being this breed
> * p1_dog ==> whether the prediction is a dog (true or false)

# Display the tweets_data table
tweets_data.head()
# Display the tweets_data table
tweets_data.tail()
tweets_data.sample(5)
The columns of the tweets_data dataframe:
> * tweet_id ==> the unique identifier for each tweet
> * retweet_num ==> the number of retweets
> * favorite_num ==> the number of favorites
> * followers_num ==> the number of followers

#### (B) Programmatic Assessing

twitter_archive.info()
twitter_archive.isnull().sum()
twitter_archive.name.value_counts()
twitter_archive.isnull().sum().sum()
twitter_archive.describe()
twitter_archive.sample(5)
twitter_archive.sample(5)
twitter_archive.rating_denominator.value_counts()
twitter_archive[twitter_archive['rating_denominator'] == 110]
im...
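One of the quality issues listed below is that the archive still contains retweets. A sketch of the kind of programmatic check behind that finding, run here on a toy frame with invented values rather than the real archive:

```python
import pandas as pd

# toy slice standing in for twitter_archive (ids invented)
demo = pd.DataFrame({
    'tweet_id': [1, 2, 3],
    'retweeted_status_id': [None, 8.9e17, None],
})

# rows with a retweeted_status_id are retweets, not original ratings
retweet_rows = demo['retweeted_status_id'].notnull().sum()
print(int(retweet_rows))  # 1
```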
### Quality issues

Twitter archive (`twitter_archive`) table:
* `tweet_id` data type is an int, not a string
* `timestamp` and `retweeted_status_timestamp` are strings, not datetime
* `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id` have a lot of missing values, as well as there d...

# making a copy to work on
archive_clean = twitter_archive.copy()
images_clean = images.copy()
tweets_clean = tweets_data.copy()
#### Define
Changing the rating data type to float.

archive_clean.rating_numerator = archive_clean.rating_numerator.astype(float, copy=False)
# test
archive_clean.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2356 entries, 0 to 2355
Data columns (total 17 columns):
tweet_id 2356 non-null int64
in_reply_to_status_id 78 non-null float64
in_reply_to_user_id 78 non-null float64
timestamp 2356 non-null object
source ...
#### Define
Fixing the data in the `rating_numerator` column: in row `46` the value should be 13.5, but it is only 5 in the data.

# for avoiding "This pattern has match groups" error from ipython notebook
import warnings
warnings.filterwarnings("ignore", 'This pattern has match groups')
# diplaying the rows that has the problem
archive_clean[archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")][['text', 'rating_numerator']]
# storing the index of ...
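The cell that actually repairs those rows is cut off above. One hedged way to finish the fix is to re-extract the numerator from the text, this time keeping the decimal part; the toy frame below only mimics the affected rows (texts and values invented):

```python
import pandas as pd

# toy frame mimicking the affected archive rows
demo = pd.DataFrame({
    'text': ["This is Bella. 13.5/10 would pet", "Good dog. 12/10"],
    'rating_numerator': [5.0, 12.0],
})

# re-extract the numerator from the 'x/y' pattern, keeping any decimal part
demo['rating_numerator'] = (
    demo['text'].str.extract(r'(\d+(?:\.\d+)?)/\d+', expand=False).astype(float)
)
print(demo['rating_numerator'].tolist())  # [13.5, 12.0]
```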
#### Define
The retweet data should be removed, so drop the retweet_count column from tweets_clean.

tweets_clean.drop('retweet_count', axis=1, inplace=True)
tweets_clean.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2339 entries, 0 to 2338
Data columns (total 3 columns):
tweet_id 2339 non-null object
favorite_count 2339 non-null int64
followers_count 2339 non-null int64
dtypes: int64(2), object(1)
memory usage: 54.9+ KB
#### Define
Removing the unnecessary columns for my analysis.

# drop the columns from the archive_clean table
archive_clean.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp','expanded_urls','text'],axis=1,inplace=True)
# test
archive_clean.info()
# inspect the images table
images.info()
| <class 'pandas.core.frame.DataFrame'>
RangeIndex: 2075 entries, 0 to 2074
Data columns (total 12 columns):
tweet_id 2075 non-null int64
jpg_url 2075 non-null object
img_num 2075 non-null int64
p1 2075 non-null object
p1_conf 2075 non-null float64
p1_dog 2075 non-null bool
p2 2075 n...
#### Define
The source column has a few urls (full HTML anchor tags), and it will be nicer and cleaner to use a single word for each.

archive_clean.source.value_counts()
url_1 = '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>'
url_2 = '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>'
url_3 = '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>'
url_4 = '<a href="http:...
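The replacement step itself is truncated above. A sketch of mapping each anchor tag to a single word — the short names are my own choice, and the toy Series stands in for `archive_clean.source`:

```python
import pandas as pd

url_iphone = '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>'
url_vine = '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>'
url_tweetdeck = '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>'

# toy source column; the real one is archive_clean.source
source = pd.Series([url_iphone, url_vine, url_iphone, url_tweetdeck])
short_names = {url_iphone: 'iphone', url_vine: 'vine', url_tweetdeck: 'tweetdeck'}
source = source.map(short_names)
print(source.value_counts().to_dict())  # {'iphone': 2, 'vine': 1, 'tweetdeck': 1}
```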
#### Define
Fix the data types of the ids to make it easy to merge the tables.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html

# Convert tweet_id to str for the tables
archive_clean.tweet_id = archive_clean.tweet_id.astype(str,copy=False)
images_clean.tweet_id = images_clean.tweet_id.astype(str,copy=False)
tweets_clean.tweet_id = tweets_clean.tweet_id.astype(str, copy=False)
archive_clean.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2356 entries, 0 to 2355
Data columns (total 10 columns):
tweet_id 2356 non-null object
timestamp 2356 non-null object
source 2356 non-null object
rating_numerator 2356 non-null float64
rating_denominator 2356 non-null int64
...
#### Define
Fix the archive_clean timestamp column to have the datetime data type.

# convert timestamp to datetime data type
archive_clean.timestamp = pd.to_datetime(archive_clean.timestamp)
#### Define
Fixing the name column in archive_clean: some names are just lowercase words, so I'll replace them with an empty string (and then with 'None').

archive_clean.name
#replace names lowercase letters with ''
archive_clean.name = archive_clean.name.str.replace('(^[a-z]*)', '')
#replace '' letters with 'None'
archive_clean.name = archive_clean.name.replace('', 'None')
# test
archive_clean.name.value_counts()
#test for the letters
archive_clean.query('name == "(^[a...
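To see what the regex replacement above does, here is a quick demonstration on a toy Series of names (values invented): leading lowercase runs are stripped, and the resulting empty strings become 'None'.

```python
import pandas as pd

names = pd.Series(['Bella', 'a', 'quite', 'Charlie'])

# '^[a-z]*' matches the lowercase run at the start of each name,
# so all-lowercase non-names collapse to '' and are then normalized to 'None'
cleaned = names.str.replace(r'(^[a-z]*)', '', regex=True).replace('', 'None')
print(cleaned.tolist())  # ['Bella', 'None', 'None', 'Charlie']
```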
#### Define
The tweets_clean data has nulls that we have to remove, so we'll drop all the nulls from our dataset.

tweets_clean.isnull().sum()
tweets_data.isnull().sum()
tweets_clean.dropna(axis=0, inplace=True)
tweets_clean.isnull().sum()
#### Define
In tweets_clean (from the API), we need to change the data types of favorite_count and followers_count to int.

tweets_clean.info()
tweets_clean.favorite_count = tweets_clean.favorite_count.astype(int,copy=False)
tweets_clean.followers_count = tweets_clean.followers_count.astype(int,copy=False)
#test
tweets_clean.info()

<class 'pandas.core.frame.DataFrame'>
Int64Index: 2339 entries, 0 to 2338
Data columns (total 3 columns):
tweet_id 2339 non-null object
favorite_count 2339 non-null int32
followers_count 2339 non-null int32
dtypes: int32(2), object(1)
memory usage: 54.8+ KB
#### Define
The p1, p2 and p3 columns have breed names that mix lowercase and uppercase, so we make everything lowercase.

images_clean['p1'] = images_clean['p1'].str.lower()
images_clean['p2'] = images_clean['p2'].str.lower()
images_clean['p3'] = images_clean['p3'].str.lower()
#test
images_clean.head()
tweets_clean.head()
archive_clean.head()
#### Define
Rename the jpg_url column to img_link.

images_clean.head()
images_clean.rename(columns={'jpg_url':'img_link'},inplace=True)
#test
images_clean.head()
### (2) Tidy

#### Define
1. Combine rating_numerator and rating_denominator into one rating column in archive_clean, then remove the two columns.

# making and adding the rating column to the archive_clean dataset
archive_clean['rating'] = archive_clean['rating_numerator']/archive_clean['rating_denominator']
#test
archive_clean.head()
# drop the rating_numerator and rating_denominator columns
archive_clean.drop(['rating_numerator', 'rating_denominator'], axis=1, inplace=True)
#### Define
2. Combine the doggo, floofer, pupper and puppo columns into one dog_stage column in archive_clean, then remove the four columns.

#1- replace all the 'None' values in the stage columns
def remove_None(df, col_name, value):
    # take the df name and col_name and return the col with no None word
    ind = df[df[col_name] == value][col_name].index
    df.loc[ind, col_name] = ''
    return df.head()
# replace to all the None value in the column
remove_None...
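The call that actually builds the single `dog_stage` column is cut off above. One hedged way to combine the four stage columns, once the 'None' words have been blanked out, is simple string concatenation:

```python
import pandas as pd

# toy frame with the four stage columns already stripped of the 'None' word
demo = pd.DataFrame({
    'doggo':   ['doggo', '', ''],
    'floofer': ['', '', ''],
    'pupper':  ['', 'pupper', ''],
    'puppo':   ['', '', ''],
})

# concatenate the four columns into one; rows with no stage fall back to 'None'
demo['dog_stage'] = demo['doggo'] + demo['floofer'] + demo['pupper'] + demo['puppo']
demo['dog_stage'] = demo['dog_stage'].replace('', 'None')
print(demo['dog_stage'].tolist())  # ['doggo', 'pupper', 'None']
```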
Now the `archive_clean` table is ready to join the other tables.

#### Define
3. In the images_clean dataset, pick one breed from the 3 predictions according to the highest confidence.

images_clean.head()
#define dog_breed function to separate out the 3 columns of breed into one with highest confidence ratio
def get_p(r):
    max_num = max(r.p1_conf, r.p2_conf, r.p3_conf)
    if r.p1_conf == max_num:
        return r.p1
    elif r.p2_conf == max_num:
        return r.p2
    elif r.p3_conf == max_num:
        return r.p3

# apply the function to pick one breed per image; 'breed' is used in the analysis below
images_clean['breed'] = images_clean.apply(get_p, axis=1)
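A quick sanity check of the highest-confidence selection logic on a toy one-row frame — the function below mirrors `get_p`, and all values are invented:

```python
import pandas as pd

def pick_breed(r):
    # return the prediction whose confidence is highest (mirrors get_p above)
    max_num = max(r.p1_conf, r.p2_conf, r.p3_conf)
    if r.p1_conf == max_num:
        return r.p1
    elif r.p2_conf == max_num:
        return r.p2
    return r.p3

demo = pd.DataFrame({
    'p1': ['golden_retriever'], 'p1_conf': [0.30],
    'p2': ['labrador_retriever'], 'p2_conf': [0.55],
    'p3': ['pug'], 'p3_conf': [0.15],
})
breeds = demo.apply(pick_breed, axis=1)
print(breeds.tolist())  # ['labrador_retriever']
```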
Now the data is clean, so we are ready to merge the tables into one.

#merge the two tables
twitter_archive_master = pd.merge(left=archive_clean, right=images_clean, how='inner', on='tweet_id')
twitter_archive_master = pd.merge(left=twitter_archive_master, right=tweets_clean, how='inner', on='tweet_id')
twitter_archive_master.info()
twitter_archive_master.head()
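The inner joins above keep only the tweet_ids that are present in every table, which is why the master table can shrink; a toy illustration with invented ids:

```python
import pandas as pd

left = pd.DataFrame({'tweet_id': ['1', '2', '3'], 'rating': [1.2, 1.3, 1.0]})
right = pd.DataFrame({'tweet_id': ['2', '3', '4'], 'breed': ['pug', 'chow', 'samoyed']})

# inner join keeps only tweet_ids present in both tables
merged = pd.merge(left=left, right=right, how='inner', on='tweet_id')
print(merged['tweet_id'].tolist())  # ['2', '3']
```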
### Saving the clean data

# saving the data frame to a csv file
twitter_archive_master.to_csv('twitter_archive_master.csv', index=False)
# saving the data frame to an sqlite file (database)
df = twitter_archive_master
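The comment above mentions saving to an SQLite database, but that code is not shown. A hedged sketch using the standard-library `sqlite3` module — the table name and the in-memory database are my own choices; a real run would connect to a `.db` file:

```python
import sqlite3
import pandas as pd

# toy stand-in for twitter_archive_master
demo = pd.DataFrame({'tweet_id': ['1', '2'], 'rating': [1.2, 1.3]})

conn = sqlite3.connect(':memory:')          # e.g. 'twitter_master.db' in a real run
demo.to_sql('master', conn, if_exists='replace', index=False)
back = pd.read_sql('SELECT * FROM master', conn)
conn.close()
print(back.shape)  # (2, 2)
```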
## Exploratory Data Analysis

### Research Question 1: what's the most popular dog_stage?

counts = df['dog_stage'].value_counts()[1:]
uni = counts.index
counts
# generating a list of the locs (index positions) for each stage, to be replaced by the tick labels
locs = np.arange(len(uni))
plt.bar(locs, counts)
plt.xlabel('Dog stage', fontsize=14)
plt.ylabel('The number of tweets', fontsize=14)
# Set text labels:
plt.xticks(locs, uni)
By ignoring the number of unknowns, from our data we can see that:
> The greatest number of tweets about the dogs are about "pupper" dogs, with 1055 tweets.
>> The "doggo" dogs have 335 tweets.
>> 115 tweets for "puppo" dogs.
>> 35 tweets for "floofer" dogs, which is the lowest number of tweets.

### Research Question 2: what's the most highly rated dog_stage?

rating = df['rating'].groupby(df['dog_stage']).mean()[:-1].sort_values(ascending=False)
rating
# plotting the values in a bar chart
dog_stage = rating.index
plt.bar(dog_stage, rating)
plt.xlabel('Dog stage', fontsize=14)
plt.ylabel('average rating', fontsize=14)
# Set text labels:
plt.title('The average rating for each dog stage', fontsize=16)
> From the bar chart we can see that the "floofer" tweets have a 1.2 average rating, the "puppo" tweets have a 1.197 average rating, the "doggo" tweets also have a 1.197 rating, and the "pupper" tweets have a 1.068 rating.

### Research Question 3: what are the top 10 breeds with the most tweets?

top_breads = df['breed'].value_counts()[:10]
topbreds_uni = top_breads.index
top_breads
# generating a list of the locs (index positions) for each breed, to be replaced by the tick labels
locs = np.arange(len(topbreds_uni))
plt.bar(locs, top_breads)
plt.xlabel('Dog breed', fontsize=14)
plt.ylabel('The number of tweets', fontsize=14)
plt.xticks(locs, topbreds_uni)
The top 10 breeds with the most tweets are: golden_retriever with 750 tweets, labrador_retriever with 495 tweets, pembroke with 440 tweets, chihuahua with 405 tweets, pug with 285 tweets, chow with 220 tweets, samoyed with 210 tweets, toy poodle with 195 tweets, pomeranian with 190 tweets, and 150 tweets about ...

### Research Question 4: which source has the highest average rating?

df['rating'].groupby(df['source']).mean().sort_values(ascending=False)
retweets = df['rating'].groupby(df['source']).mean().sort_values(ascending=False)
source = retweets.index
plt.bar(source, retweets)
plt.xlabel('source', fontsize=14)
plt.ylabel('rating average', fontsize=14)
plt.title('The rating average for each source', fontsize=16)
> The source with the highest average rating is Twitter for iPhone with 18.77 (about 19),
>> then the Twitter web client with a 1.008 average rating,
>> and a 1.006 average rating from TweetDeck.

### Research Question 5: what are the top 4 images that have the most favorite_counts?

images = df['favorite_count'].groupby(df['img_link']).sum().sort_values(ascending=False).iloc[:4]
image_lbl = []
for i in range(len(images)):
    x = df[df['img_link'] == images.index[i]]['breed'].iloc[0]
    image_lbl.append(x)

dog_stage = []
for i in range(len(images)):
    x = df[df['img_link'] == images.index[i]]['dog_stage'].iloc[0]
    dog_stage.append(x)
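The two label-collection loops above can also be written more compactly with a groupby; a sketch on a toy frame (values invented):

```python
import pandas as pd

# toy frame with the three columns the loops above rely on
demo = pd.DataFrame({
    'img_link': ['a.jpg', 'a.jpg', 'b.jpg'],
    'favorite_count': [10, 5, 8],
    'breed': ['pug', 'pug', 'chow'],
})

# total favorites per image, most-liked first
top = demo.groupby('img_link')['favorite_count'].sum().sort_values(ascending=False)
# first breed label seen for each top image
labels = [demo.loc[demo['img_link'] == link, 'breed'].iloc[0] for link in top.index]
print(top.tolist(), labels)  # [15, 8] ['pug', 'chow']
```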