Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
| id (int64) | prompt (string) | repo (string) | completion (string) |
|---|---|---|---|
0 | # complete multi-line body codes for that function<fim_prefix>#
'''
Train a k - nearest neighbors classifier for face recognition.
Trains a k-nearest neighbors classifier for face recognition.
:param train_dir: directory that contains a sub-directory for each known person, with its name.
(View in source code... | ageitgey/face_recognition | X = []
y = []
for class_dir in os.listdir(train_dir):
if not os.path.isdir(os.path.join(train_dir, class_dir)):
continue
for img_path in image_files_in_folder(os.path.join(train_dir, class_dir)):
image = face_recognition.load_image_file(img_path)
face_bounding_boxes = face_recognition.fa... |
1 | # complete multi-line body codes for that function<fim_prefix>#
'''
Predicts faces in a given image using a KNN classifier.
Recognizes faces in given image using a trained KNN classifier
:param X_img_path: path to image to be recognized
:param knn_clf: (optional) a knn classifier object. if not specified, mode... | ageitgey/face_recognition | if not os.path.isfile(X_img_path) or os.path.splitext(X_img_path)[1][1:] not in ALLOWED_EXTENSIONS:
raise Exception('Invalid image path: {}'.format(X_img_path))
if knn_clf is None and model_path is None:
raise Exception('Must supply knn classifier either thourgh knn_clf or model_path')
if knn_clf is None:
w... |
2 | # complete multi-line body codes for that function<fim_prefix>#
'''
Show the face recognition results visually.
Shows the face recognition results visually.
:param img_path: path to image to be recognized
:param predictions: results of the predict function
:return:
'''
def show_prediction_labels_on_image(i... | ageitgey/face_recognition | pil_image = Image.open(img_path).convert('RGB')
draw = ImageDraw.Draw(pil_image)
for (name, (top, right, bottom, left)) in predictions:
draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255))
name = name.encode('UTF-8')
(text_width, text_height) = draw.textsize(name)
draw.rectangle(((left, b... |
3 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert a dlib rect object to a plain tuple in ( top right bottom left
Convert a dlib 'rect' object to a plain tuple in (top, right, bottom, left) order
:param rect: a dlib 'rect' object
:return: a plain tuple representation of the rect in (to... | ageitgey/face_recognition | return (rect.top(), rect.right(), rect.bottom(), rect.left()) |
4 | # complete multi-line body codes for that function<fim_prefix>#
'''
Trim the given CSS tuple to the bounds of the image.
Make sure a tuple in (top, right, bottom, left) order is within the bounds of the image.
:param css: plain tuple representation of the rect in (top, right, bottom, left) order
:param image_... | ageitgey/face_recognition | return (max(css[0], 0), min(css[1], image_shape[1]), min(css[2], image_shape[0]), max(css[3], 0)) |
5 | # complete multi-line body codes for that function<fim_prefix>#
'''
Given a list of face encodings compare them to a known face encoding and get a euclidean distance for each comparison face.
Given a list of face encodings, compare them to a known face encoding and get a euclidean distance
for each comparison face.... | ageitgey/face_recognition | if len(face_encodings) == 0:
return np.empty(0)
return np.linalg.norm(face_encodings - face_to_compare, axis=1) |
6 | # complete multi-line body codes for that function<fim_prefix>#
'''
Loads an image file into a numpy array.
Loads an image file (.jpg, .png, etc) into a numpy array
:param file: image file name or file object to load
:param mode: format to convert the image to. Only 'RGB' (8-bit RGB, 3 channels) and 'L' (black... | ageitgey/face_recognition | im = PIL.Image.open(file)
if mode:
im = im.convert(mode)
return np.array(im) |
7 | # complete multi-line body codes for that function<fim_prefix>#
'''
Returns an array of bounding boxes of human faces in a image.
Returns an array of bounding boxes of human faces in a image
:param img: An image (as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking... | ageitgey/face_recognition | if model == 'cnn':
return cnn_face_detector(img, number_of_times_to_upsample)
else:
return face_detector(img, number_of_times_to_upsample) |
8 | # complete multi-line body codes for that function<fim_prefix>#
'''
Returns an array of bounding boxes of human faces in a image.
Returns an array of bounding boxes of human faces in a image
:param img: An image (as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking... | ageitgey/face_recognition | if model == 'cnn':
return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, 'cnn')]
else:
return [_trim_css_to_bounds(_rect_to_css(face), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, model)] |
9 | # complete multi-line body codes for that function<fim_prefix>#
'''
Returns a 2d array of bounding boxes of human faces in a given image using the cnn face detectors.
Returns an 2d array of bounding boxes of human faces in a image using the cnn face detector
If you are using a GPU, this can give you much faster res... | ageitgey/face_recognition | def convert_cnn_detections_to_css(detections):
return [_trim_css_to_bounds(_rect_to_css(face.rect), images[0].shape) for face in detections]
raw_detections_batched = _raw_face_locations_batched(images, number_of_times_to_upsample, batch_size)
return list(map(convert_cnn_detections_to_css, raw_detections_batched)) |
10 | # complete multi-line body codes for that function<fim_prefix>#
'''
Given an image returns a dict of face feature locations
Given an image, returns a dict of face feature locations (eyes, nose, etc) for each face in the image
:param face_image: image to search
:param face_locations: Optionally provide a list o... | ageitgey/face_recognition | landmarks = _raw_face_landmarks(face_image, face_locations, model)
landmarks_as_tuples = [[(p.x, p.y) for p in landmark.parts()] for landmark in landmarks]
if model == 'large':
return [{'chin': points[0:17], 'left_eyebrow': points[17:22], 'right_eyebrow': points[22:27], 'nose_bridge': points[27:31], 'nose_tip': poi... |
11 | # complete multi-line body codes for that function<fim_prefix>#
'''
Given an image returns the 128 - dimensional face encoding for each face in the image.
Given an image, return the 128-dimension face encoding for each face in the image.
:param face_image: The image that contains one or more faces
:param known... | ageitgey/face_recognition | raw_landmarks = _raw_face_landmarks(face_image, known_face_locations, model='small')
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks] |
12 | # complete multi-line body codes for that function<fim_prefix>#
'''
Parses a string into a base - level structure type.
Parses the given data type string to a :class:`DataType`. The data type string format equals
to :class:`DataType.simpleString`, except that top level struct type can omit
the ``struct<>`` and ... | apache/spark | sc = SparkContext._active_spark_context
def from_ddl_schema(type_str):
return _parse_datatype_json_string(sc._jvm.org.apache.spark.sql.types.StructType.fromDDL(type_str).json())
def from_ddl_datatype(type_str):
return _parse_datatype_json_string(sc._jvm.org.apache.spark.sql.api.python.PythonSQLUtils.parseData... |
13 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return the Catalyst datatype from the size of integers.
Return the Catalyst datatype from the size of integers.
'''
def _int_size_to_type(size):<fim_suffix><fim_middle> | apache/spark | if size <= 8:
return ByteType
if size <= 16:
return ShortType
if size <= 32:
return IntegerType
if size <= 64:
return LongType |
14 | # complete multi-line body codes for that function<fim_prefix>#
'''
Infer the DataType from obj
Infer the DataType from obj
'''
def _infer_type(obj):<fim_suffix><fim_middle> | apache/spark | if obj is None:
return NullType()
if hasattr(obj, '__UDT__'):
return obj.__UDT__
dataType = _type_mappings.get(type(obj))
if dataType is DecimalType:
return DecimalType(38, 18)
elif dataType is not None:
return dataType()
if isinstance(obj, dict):
for (key, value) in obj.items():
if key is n... |
15 | # complete multi-line body codes for that function<fim_prefix>#
'''
Infer the schema from dict namedtuple or object
Infer the schema from dict/namedtuple/object
'''
def _infer_schema(row, names=None):<fim_suffix><fim_middle> | apache/spark | if isinstance(row, dict):
items = sorted(row.items())
elif isinstance(row, (tuple, list)):
if hasattr(row, '__fields__'):
items = zip(row.__fields__, tuple(row))
elif hasattr(row, '_fields'):
items = zip(row._fields, tuple(row))
else:
if names is None:
names = ['_%d' ... |
16 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return whether there is NullType in dt
Return whether there is NullType in `dt` or not
'''
def _has_nulltype(dt):<fim_suffix><fim_middle> | apache/spark | if isinstance(dt, StructType):
return any((_has_nulltype(f.dataType) for f in dt.fields))
elif isinstance(dt, ArrayType):
return _has_nulltype(dt.elementType)
elif isinstance(dt, MapType):
return _has_nulltype(dt.keyType) or _has_nulltype(dt.valueType)
else:
return isinstance(dt, NullType) |
17 | # complete multi-line body codes for that function<fim_prefix>#
'''
Create a converter to drop the names of fields in obj
Create a converter to drop the names of fields in obj
'''
def _create_converter(dataType):<fim_suffix><fim_middle> | apache/spark | if not _need_converter(dataType):
return lambda x: x
if isinstance(dataType, ArrayType):
conv = _create_converter(dataType.elementType)
return lambda row: [conv(v) for v in row]
elif isinstance(dataType, MapType):
kconv = _create_converter(dataType.keyType)
vconv = _create_converter(dataType.valueTy... |
18 | # complete multi-line body codes for that function<fim_prefix>#
'''
Returns a verifier that checks the type of obj against dataType and raises a TypeError if they do not match.
Make a verifier that checks the type of obj against dataType and raises a TypeError if they do
not match.
This verifier also checks th... | apache/spark | if name is None:
new_msg = lambda msg: msg
new_name = lambda n: 'field %s' % n
else:
new_msg = lambda msg: '%s: %s' % (name, msg)
new_name = lambda n: 'field %s in %s' % (n, name)
def verify_nullability(obj):
if obj is None:
if nullable:
return True
else:
rai... |
19 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert Spark data type to Arrow type
Convert Spark data type to pyarrow type
'''
def to_arrow_type(dt):<fim_suffix><fim_middle> | apache/spark | import pyarrow as pa
if type(dt) == BooleanType:
arrow_type = pa.bool_()
elif type(dt) == ByteType:
arrow_type = pa.int8()
elif type(dt) == ShortType:
arrow_type = pa.int16()
elif type(dt) == IntegerType:
arrow_type = pa.int32()
elif type(dt) == LongType:
arrow_type = pa.int64()
elif type(dt) == Flo... |
20 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert a Spark schema from Spark to Arrow
Convert a schema from Spark to Arrow
'''
def to_arrow_schema(schema):<fim_suffix><fim_middle> | apache/spark | import pyarrow as pa
fields = [pa.field(field.name, to_arrow_type(field.dataType), nullable=field.nullable) for field in schema]
return pa.schema(fields) |
21 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert a pyarrow type to Spark data type.
Convert pyarrow type to Spark data type.
'''
def from_arrow_type(at):<fim_suffix><fim_middle> | apache/spark | import pyarrow.types as types
if types.is_boolean(at):
spark_type = BooleanType()
elif types.is_int8(at):
spark_type = ByteType()
elif types.is_int16(at):
spark_type = ShortType()
elif types.is_int32(at):
spark_type = IntegerType()
elif types.is_int64(at):
spark_type = LongType()
elif types.is_float... |
22 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert schema from Arrow to Spark.
Convert schema from Arrow to Spark.
'''
def from_arrow_schema(arrow_schema):<fim_suffix><fim_middle> | apache/spark | return StructType([StructField(field.name, from_arrow_type(field.type), nullable=field.nullable) for field in arrow_schema]) |
23 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert timezone aware timestamps to timezone - naive in the specified timezone or local timezone.
Convert timezone aware timestamps to timezone-naive in the specified timezone or local timezone.
If the input series is not a timestamp series, then... | apache/spark | from pyspark.sql.utils import require_minimum_pandas_version
require_minimum_pandas_version()
from pandas.api.types import is_datetime64tz_dtype
tz = timezone or _get_local_timezone()
if is_datetime64tz_dtype(s.dtype):
return s.dt.tz_convert(tz).dt.tz_localize(None)
else:
return s |
24 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert timezone aware timestamps to timezone - naive in the specified timezone or local timezone - naive in the specified timezone or local timezone - naive in the specified timezone.
Convert timezone aware timestamps to timezone-naive in the specifie... | apache/spark | from pyspark.sql.utils import require_minimum_pandas_version
require_minimum_pandas_version()
for (column, series) in pdf.iteritems():
pdf[column] = _check_series_localize_timestamps(series, timezone)
return pdf |
25 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert a tz - naive timestamp in the specified timezone or local timezone to UTC normalized for Spark internal storage.
Convert a tz-naive timestamp in the specified timezone or local timezone to UTC normalized for
Spark internal storage
:par... | apache/spark | from pyspark.sql.utils import require_minimum_pandas_version
require_minimum_pandas_version()
from pandas.api.types import is_datetime64_dtype, is_datetime64tz_dtype
if is_datetime64_dtype(s.dtype):
tz = timezone or _get_local_timezone()
return s.dt.tz_localize(tz, ambiguous=False).dt.tz_convert('UTC')
elif is_... |
26 | # complete multi-line body codes for that function<fim_prefix>#
'''
Convert timestamp to timezone - naive in the specified timezone or local timezone.
Convert timestamp to timezone-naive in the specified timezone or local timezone
:param s: a pandas.Series
:param from_timezone: the timezone to convert from. if... | apache/spark | from pyspark.sql.utils import require_minimum_pandas_version
require_minimum_pandas_version()
import pandas as pd
from pandas.api.types import is_datetime64tz_dtype, is_datetime64_dtype
from_tz = from_timezone or _get_local_timezone()
to_tz = to_timezone or _get_local_timezone()
if is_datetime64tz_dtype(s.dtype):
r... |
27 | # complete multi-line body codes for that function<fim_prefix>#
'''
Constructs a new StructType object by adding new elements to the list of fields.
Construct a StructType by adding new elements to it to define the schema. The method accepts
either:
a) A single parameter which is a StructField obje... | apache/spark | if isinstance(field, StructField):
self.fields.append(field)
self.names.append(field.name)
else:
if isinstance(field, str) and data_type is None:
raise ValueError('Must specify DataType if passing name of struct_field to create.')
if isinstance(data_type, str):
data_type_f = _parse_datat... |
28 | # complete multi-line body codes for that function<fim_prefix>#
'''
Cache the sqlType into class because it s heavy used in toInternal.
Cache the sqlType() into class, because it's heavy used in `toInternal`.
'''
def _cachedSqlType(cls):<fim_suffix><fim_middle> | apache/spark | if not hasattr(cls, '_cached_sql_type'):
cls._cached_sql_type = cls.sqlType()
return cls._cached_sql_type |
29 | # complete multi-line body codes for that function<fim_prefix>#
'''
Converts the table into a dict.
Return as an dict
:param recursive: turns the nested Row as dict (default: False).
>>> Row(name="Alice", age=11).asDict() == {'name': 'Alice', 'age': 11}
True
>>> row = Row(key=1, value=... | apache/spark | if not hasattr(self, '__fields__'):
raise TypeError('Cannot convert a Row class into dict')
if recursive:
def conv(obj):
if isinstance(obj, Row):
return obj.asDict(True)
elif isinstance(obj, list):
return [conv(o) for o in obj]
elif isinstance(obj, dict):
... |
30 | # complete multi-line body codes for that function<fim_prefix>#
'''
Returns the summary of the LinearRegressionModel.
Gets summary (e.g. residuals, mse, r-squared ) of model on
training set. An exception is thrown if
`trainingSummary is None`.
'''
def summary(self):<fim_suffix><fim_middle> | apache/spark | if self.hasSummary:
return LinearRegressionTrainingSummary(super(LinearRegressionModel, self).summary)
else:
raise RuntimeError('No training summary available for this %s' % self.__class__.__name__) |
31 | # complete multi-line body codes for that function<fim_prefix>#
'''
Evaluates the model on a test dataset.
Evaluates the model on a test dataset.
:param dataset:
Test dataset to evaluate model on, where dataset is an
instance of :py:class:`pyspark.sql.DataFrame`
'''
def evaluate(self, datas... | apache/spark | if not isinstance(dataset, DataFrame):
raise ValueError('dataset must be a DataFrame but got %s.' % type(dataset))
java_lr_summary = self._call_java('evaluate', dataset)
return LinearRegressionSummary(java_lr_summary) |
32 | # complete multi-line body codes for that function<fim_prefix>#
'''
Returns a GeneralizedLinearRegressionTrainingSummary object for this training set.
Gets summary (e.g. residuals, deviance, pValues) of model on
training set. An exception is thrown if
`trainingSummary is None`.
'''
def summary(self):<fi... | apache/spark | if self.hasSummary:
return GeneralizedLinearRegressionTrainingSummary(super(GeneralizedLinearRegressionModel, self).summary)
else:
raise RuntimeError('No training summary available for this %s' % self.__class__.__name__) |
33 | # complete multi-line body codes for that function<fim_prefix>#
'''
Evaluates the model on a test dataset.
Evaluates the model on a test dataset.
:param dataset:
Test dataset to evaluate model on, where dataset is an
instance of :py:class:`pyspark.sql.DataFrame`
'''
def evaluate(self, datas... | apache/spark | if not isinstance(dataset, DataFrame):
raise ValueError('dataset must be a DataFrame but got %s.' % type(dataset))
java_glr_summary = self._call_java('evaluate', dataset)
return GeneralizedLinearRegressionSummary(java_glr_summary) |
34 | # complete multi-line body codes for that function<fim_prefix>#
'''
Get all the directories that are local
Get all the directories
'''
def _get_local_dirs(sub):<fim_suffix><fim_middle> | apache/spark | path = os.environ.get('SPARK_LOCAL_DIRS', '/tmp')
dirs = path.split(',')
if len(dirs) > 1:
rnd = random.Random(os.getpid() + id(dirs))
random.shuffle(dirs, rnd.random)
return [os.path.join(d, 'python', str(os.getpid()), sub) for d in dirs] |
35 | # complete multi-line body codes for that function<fim_prefix>#
'''
Choose one directory for spill by number n
Choose one directory for spill by number n
'''
def _get_spill_dir(self, n):<fim_suffix><fim_middle> | apache/spark | return os.path.join(self.localdirs[n % len(self.localdirs)], str(n)) |
36 | # complete multi-line body codes for that function<fim_prefix>#
'''
Combine the items by creator and combiner
Combine the items by creator and combiner
'''
def mergeValues(self, iterator):<fim_suffix><fim_middle> | apache/spark | (creator, comb) = (self.agg.createCombiner, self.agg.mergeValue)
(c, data, pdata, hfun, batch) = (0, self.data, self.pdata, self._partition, self.batch)
limit = self.memory_limit
for (k, v) in iterator:
d = pdata[hfun(k)] if pdata else data
d[k] = comb(d[k], v) if k in d else creator(v)
c += 1
if c >= b... |
37 | # complete multi-line body codes for that function<fim_prefix>#
'''
Merge a set of keys and values by merging them into a single object.
Merge (K,V) pair by mergeCombiner
'''
def mergeCombiners(self, iterator, limit=None):<fim_suffix><fim_middle> | apache/spark | if limit is None:
limit = self.memory_limit
(comb, hfun, objsize) = (self.agg.mergeCombiners, self._partition, self._object_size)
(c, data, pdata, batch) = (0, self.data, self.pdata, self.batch)
for (k, v) in iterator:
d = pdata[hfun(k)] if pdata else data
d[k] = comb(d[k], v) if k in d else v
if not li... |
38 | # complete multi-line body codes for that function<fim_prefix>#
'''
This function will dump already partitioned data into disks. It will dump the data into the disks and the memory used by the memory.
dump already partitioned data into disks.
It will dump the data in batch for better performance.
'''
def _spil... | apache/spark | global MemoryBytesSpilled, DiskBytesSpilled
path = self._get_spill_dir(self.spills)
if not os.path.exists(path):
os.makedirs(path)
used_memory = get_used_memory()
if not self.pdata:
streams = [open(os.path.join(path, str(i)), 'wb') for i in range(self.partitions)]
for (k, v) in self.data.items():
h ... |
39 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return all items as iterator
Return all merged items as iterator
'''
def items(self):<fim_suffix><fim_middle> | apache/spark | if not self.pdata and (not self.spills):
return iter(self.data.items())
return self._external_items() |
40 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return all partitioned items as iterator
Return all partitioned items as iterator
'''
def _external_items(self):<fim_suffix><fim_middle> | apache/spark | assert not self.data
if any(self.pdata):
self._spill()
self.pdata = []
try:
for i in range(self.partitions):
for v in self._merged_items(i):
yield v
self.data.clear()
for j in range(self.spills):
path = self._get_spill_dir(j)
os.remove(os.path.join(pat... |
41 | # complete multi-line body codes for that function<fim_prefix>#
'''
Merge the partitioned items and return the as iterator
merge the partitioned items and return the as iterator
If one partition can not be fit in memory, then them will be
partitioned and merged recursively.
'''
def _recursive_m... | apache/spark | subdirs = [os.path.join(d, 'parts', str(index)) for d in self.localdirs]
m = ExternalMerger(self.agg, self.memory_limit, self.serializer, subdirs, self.scale * self.partitions, self.partitions, self.batch)
m.pdata = [{} for _ in range(self.partitions)]
limit = self._next_limit()
for j in range(self.spills):
path = ... |
42 | # complete multi-line body codes for that function<fim_prefix>#
'''
Choose one directory for spill by number n
Choose one directory for spill by number n
'''
def _get_path(self, n):<fim_suffix><fim_middle> | apache/spark | d = self.local_dirs[n % len(self.local_dirs)]
if not os.path.exists(d):
os.makedirs(d)
return os.path.join(d, str(n)) |
43 | # complete multi-line body codes for that function<fim_prefix>#
'''
Sort the elements in iterator do external sort when the memory is below the limit.
Sort the elements in iterator, do external sort when the memory
goes above the limit.
'''
def sorted(self, iterator, key=None, reverse=False):<fim_suffix><fim_mi... | apache/spark | global MemoryBytesSpilled, DiskBytesSpilled
(batch, limit) = (100, self._next_limit())
(chunks, current_chunk) = ([], [])
iterator = iter(iterator)
while True:
chunk = list(itertools.islice(iterator, batch))
current_chunk.extend(chunk)
if len(chunk) < batch:
break
used_memory = get_used_memory()... |
44 | # complete multi-line body codes for that function<fim_prefix>#
'''
dump the values into disk
dump the values into disk
'''
def _spill(self):<fim_suffix><fim_middle> | apache/spark | global MemoryBytesSpilled, DiskBytesSpilled
if self._file is None:
self._open_file()
used_memory = get_used_memory()
pos = self._file.tell()
self._ser.dump_stream(self.values, self._file)
self.values = []
gc.collect()
DiskBytesSpilled += self._file.tell() - pos
MemoryBytesSpilled += max(used_memory - get_used_memor... |
45 | # complete multi-line body codes for that function<fim_prefix>#
'''
Dump already partitioned data into disks.
dump already partitioned data into disks.
'''
def _spill(self):<fim_suffix><fim_middle> | apache/spark | global MemoryBytesSpilled, DiskBytesSpilled
path = self._get_spill_dir(self.spills)
if not os.path.exists(path):
os.makedirs(path)
used_memory = get_used_memory()
if not self.pdata:
streams = [open(os.path.join(path, str(i)), 'wb') for i in range(self.partitions)]
self._sorted = len(self.data) < self.SORT_K... |
46 | # complete multi-line body codes for that function<fim_prefix>#
'''
Load a partition from disk then sort and group by key
load a partition from disk, then sort and group by key
'''
def _merge_sorted_items(self, index):<fim_suffix><fim_middle> | apache/spark | def load_partition(j):
path = self._get_spill_dir(j)
p = os.path.join(path, str(index))
with open(p, 'rb', 65536) as f:
for v in self.serializer.load_stream(f):
yield v
disk_items = [load_partition(j) for j in range(self.spills)]
if self._sorted:
sorted_items = heapq.merge(disk_items... |
47 | # complete multi-line body codes for that function<fim_prefix>#
'''
This function is called by the worker process.
Called by a worker process after the fork().
'''
def worker(sock, authenticated):<fim_suffix><fim_middle> | apache/spark | signal.signal(SIGHUP, SIG_DFL)
signal.signal(SIGCHLD, SIG_DFL)
signal.signal(SIGTERM, SIG_DFL)
signal.signal(SIGINT, signal.default_int_handler)
infile = os.fdopen(os.dup(sock.fileno()), 'rb', 65536)
outfile = os.fdopen(os.dup(sock.fileno()), 'wb', 65536)
if not authenticated:
client_secret = UTF8Deserializer().loa... |
48 | # complete multi-line body codes for that function<fim_prefix>#
'''
This function returns consistent hash code for builtin types and tuple with None.
This function returns consistent hash code for builtin types, especially
for None and tuple with None.
The algorithm is similar to that one used by CPython 2.7
... | apache/spark | if sys.version_info >= (3, 2, 3) and 'PYTHONHASHSEED' not in os.environ:
raise Exception('Randomness of hash of string should be disabled via PYTHONHASHSEED')
if x is None:
return 0
if isinstance(x, tuple):
h = 3430008
for i in x:
h ^= portable_hash(i)
h *= 1000003
h &= sys.maxsi... |
49 | # complete multi-line body codes for that function<fim_prefix>#
'''
Parse a memory string in the format supported by Java and return the value in MiB.
Parse a memory string in the format supported by Java (e.g. 1g, 200m) and
return the value in MiB
>>> _parse_memory("256m")
256
>>> _parse_memory("2g")
... | apache/spark | units = {'g': 1024, 'm': 1, 't': 1 << 20, 'k': 1.0 / 1024}
if s[-1].lower() not in units:
raise ValueError('invalid format: ' + s)
return int(float(s[:-1]) * units[s[-1].lower()]) |
50 | # complete multi-line body codes for that function<fim_prefix>#
'''
Ignore the u prefix of string in doc tests
Ignore the 'u' prefix of string in doc tests, to make it works
in both python 2 and 3
'''
def ignore_unicode_prefix(f):<fim_suffix><fim_middle> | apache/spark | if sys.version >= '3':
literal_re = re.compile("(\\W|^)[uU](['])", re.UNICODE)
f.__doc__ = literal_re.sub('\\1\\2', f.__doc__)
return f |
51 | # complete multi-line body codes for that function<fim_prefix>#
'''
Persist this RDD with the default storage level.
Persist this RDD with the default storage level (C{MEMORY_ONLY}).
'''
def cache(self):<fim_suffix><fim_middle> | apache/spark | self.is_cached = True
self.persist(StorageLevel.MEMORY_ONLY)
return self |
52 | # complete multi-line body codes for that function<fim_prefix>#
'''
Set this RDD s storage level to persist its values across operations
.
Set this RDD's storage level to persist its values across operations
after the first time it is computed. This can only be used to assign
a new storage level ... | apache/spark | self.is_cached = True
javaStorageLevel = self.ctx._getJavaStorageLevel(storageLevel)
self._jrdd.persist(javaStorageLevel)
return self |
53 | # complete multi-line body codes for that function<fim_prefix>#
'''
Mark the RDD as non - persistent and remove all blocks for the current entry from memory and disk.
Mark the RDD as non-persistent, and remove all blocks for it from
memory and disk.
.. versionchanged:: 3.0.0
Added optional a... | apache/spark | self.is_cached = False
self._jrdd.unpersist(blocking)
return self |
54 | # complete multi-line body codes for that function<fim_prefix>#
'''
Gets the name of the file to which this RDD was checkpointed.
Gets the name of the file to which this RDD was checkpointed
Not defined if RDD is checkpointed locally.
'''
def getCheckpointFile(self):<fim_suffix><fim_middle> | apache/spark | checkpointFile = self._jrdd.rdd().getCheckpointFile()
if checkpointFile.isDefined():
return checkpointFile.get() |
55 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new RDD by applying a function to each element of this RDD.
Return a new RDD by applying a function to each element of this RDD.
>>> rdd = sc.parallelize(["b", "a", "c"])
>>> sorted(rdd.map(lambda x: (x, 1)).collect())
... | apache/spark | def func(_, iterator):
return map(fail_on_stopiteration(f), iterator)
return self.mapPartitionsWithIndex(func, preservesPartitioning) |
56 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new RDD by first applying a function to all elements of this RDD and then flattening the results.
Return a new RDD by first applying a function to all elements of this
RDD, and then flattening the results.
>>> rdd = sc.paralle... | apache/spark | def func(s, iterator):
return chain.from_iterable(map(fail_on_stopiteration(f), iterator))
return self.mapPartitionsWithIndex(func, preservesPartitioning) |
57 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new RDD by applying a function to each partition of this RDD.
Return a new RDD by applying a function to each partition of this RDD.
>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator): yield sum(iterator)
>>... | apache/spark | def func(s, iterator):
return f(iterator)
return self.mapPartitionsWithIndex(func, preservesPartitioning) |
58 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new RDD by applying a function to each partition of this RDD while tracking the index of the original partition.
Deprecated: use mapPartitionsWithIndex instead.
Return a new RDD by applying a function to each partition of this RDD,
... | apache/spark | warnings.warn('mapPartitionsWithSplit is deprecated; use mapPartitionsWithIndex instead', DeprecationWarning, stacklevel=2)
return self.mapPartitionsWithIndex(f, preservesPartitioning) |
59 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return an RDD containing the distinct elements in this RDD.
Return a new RDD containing the distinct elements in this RDD.
>>> sorted(sc.parallelize([1, 1, 2, 3]).distinct().collect())
[1, 2, 3]
'''
def distinct(self, numPartitions=Non... | apache/spark | return self.map(lambda x: (x, None)).reduceByKey(lambda x, _: x, numPartitions).map(lambda x: x[0]) |
60 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new RDD with the specified fraction of the total number of elements in this RDD.
Return a sampled subset of this RDD.
:param withReplacement: can elements be sampled multiple times (replaced when sampled out)
:param fraction: ... | apache/spark | assert fraction >= 0.0, 'Negative fraction value: %s' % fraction
return self.mapPartitionsWithIndex(RDDSampler(withReplacement, fraction, seed).func, True) |
61 | # complete multi-line body codes for that function<fim_prefix>#
'''
Randomly splits this RDD with the provided weights.
Randomly splits this RDD with the provided weights.
:param weights: weights for splits, will be normalized if they don't sum to 1
:param seed: random seed
:return: split RDDs ... | apache/spark | s = float(sum(weights))
cweights = [0.0]
for w in weights:
    cweights.append(cweights[-1] + w / s)
if seed is None:
    seed = random.randint(0, 2 ** 32 - 1)
return [self.mapPartitionsWithIndex(RDDRangeSampler(lb, ub, seed).func, True) for (lb, ub) in zip(cweights, cweights[1:])] |
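The randomSplit body normalizes the weights into cumulative bounds, then routes each element by a uniform draw; the real code applies the same `(lb, ub)` bounds per partition via `RDDRangeSampler`. A minimal single-process sketch (names illustrative):

```python
import random

def random_split(elems, weights, seed=None):
    """Normalize weights into cumulative bounds, then assign each
    element to the split whose [lb, ub) range contains a uniform draw."""
    s = float(sum(weights))
    cweights = [0.0]
    for w in weights:
        cweights.append(cweights[-1] + w / s)
    rng = random.Random(seed)
    splits = [[] for _ in weights]
    for x in elems:
        r = rng.random()
        for i, (lb, ub) in enumerate(zip(cweights, cweights[1:])):
            if lb <= r < ub:
                splits[i].append(x)
                break
    return splits

parts = random_split(range(100), [2, 3], seed=17)
print(len(parts[0]) + len(parts[1]))  # 100: every element lands in exactly one split
```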
62 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a fixed - size sampled subset of this RDD.
Return a fixed-size sampled subset of this RDD.
.. note:: This method should only be used if the resulting array is expected
to be small, as all the data is loaded into the driver's... | apache/spark | numStDev = 10.0
if num < 0:
    raise ValueError('Sample size cannot be negative.')
elif num == 0:
    return []
initialCount = self.count()
if initialCount == 0:
    return []
rand = random.Random(seed)
if not withReplacement and num >= initialCount:
    samples = self.collect()
    rand.shuffle(samples)
    return sa...
63 | # complete multi-line body codes for that function<fim_prefix>#
'''
Compute the sampling rate for a specific sample size.
Returns a sampling rate that guarantees a sample of
size >= sampleSizeLowerBound 99.99% of the time.
How the sampling rate is determined:
Let p = num / total, where num is t... | apache/spark | fraction = float(sampleSizeLowerBound) / total
if withReplacement:
    numStDev = 5
    if sampleSizeLowerBound < 12:
        numStDev = 9
    return fraction + numStDev * sqrt(fraction / total)
else:
    delta = 5e-05
    gamma = -log(delta) / total
    return min(1, fraction + gamma + sqrt(gamma * gamma + 2 * gamma *...
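The over-sampling formula above inflates the naive rate `num / total` so a Poisson/binomial draw reaches the requested size with high probability. A standalone sketch (the truncated final term is assumed to be `2 * gamma * fraction`, consistent with the usual bound; function name is illustrative):

```python
from math import sqrt, log

def fraction_for_sample_size(num, total, with_replacement):
    """Return a sampling rate >= num/total that yields at least
    `num` samples ~99.99% of the time, per the body above."""
    fraction = float(num) / total
    if with_replacement:
        num_std_dev = 9 if num < 12 else 5
        return fraction + num_std_dev * sqrt(fraction / total)
    else:
        delta = 5e-05
        gamma = -log(delta) / total
        return min(1, fraction + gamma + sqrt(gamma * gamma + 2 * gamma * fraction))

print(fraction_for_sample_size(100, 10000, False) > 0.01)  # True: padded above num/total
```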
64 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return the union of this RDD and another RDD.
Return the union of this RDD and another one.
>>> rdd = sc.parallelize([1, 1, 2, 3])
>>> rdd.union(rdd).collect()
[1, 1, 2, 3, 1, 1, 2, 3]
'''
def union(self, other):<fim_suffix><fi... | apache/spark | if self._jrdd_deserializer == other._jrdd_deserializer:
    rdd = RDD(self._jrdd.union(other._jrdd), self.ctx, self._jrdd_deserializer)
else:
    self_copy = self._reserialize()
    other_copy = other._reserialize()
    rdd = RDD(self_copy._jrdd.union(other_copy._jrdd), self.ctx, self.ctx.serializer)
if self.partitione... |
65 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return the intersection of this RDD and another RDD.
Return the intersection of this RDD and another one. The output will
not contain any duplicate elements, even if the input RDDs did.
.. note:: This method performs a shuffle internal... | apache/spark | return self.map(lambda v: (v, None)).cogroup(other.map(lambda v: (v, None))).filter(lambda k_vs: all(k_vs[1])).keys() |
66 | # complete multi-line body codes for that function<fim_prefix>#
'''
Repartition the RDD according to the given partitioner and within each resulting partition sort records by their keys.
Repartition the RDD according to the given partitioner and, within each resulting partition,
sort records by their keys.
... | apache/spark | if numPartitions is None:
    numPartitions = self._defaultReducePartitions()
memory = _parse_memory(self.ctx._conf.get('spark.python.worker.memory', '512m'))
serializer = self._jrdd_deserializer
def sortPartition(iterator):
    sort = ExternalSorter(memory * 0.9, serializer).sorted
    return iter(sort(iterator, key=...
67 | # complete multi-line body codes for that function<fim_prefix>#
'''
Sorts this RDD by key.
Sorts this RDD, which is assumed to consist of (key, value) pairs.
>>> tmp = [('a', 1), ('b', 2), ('1', 3), ('d', 4), ('2', 5)]
>>> sc.parallelize(tmp).sortByKey().first()
('1', 3)
>>> sc.parallel... | apache/spark | if numPartitions is None:
    numPartitions = self._defaultReducePartitions()
memory = self._memory_limit()
serializer = self._jrdd_deserializer
def sortPartition(iterator):
    sort = ExternalSorter(memory * 0.9, serializer).sorted
    return iter(sort(iterator, key=lambda kv: keyfunc(kv[0]), reverse=not ascending))
... |
68 | # complete multi-line body codes for that function<fim_prefix>#
'''
Sorts this RDD by the given keyfunc.
Sorts this RDD by the given keyfunc
>>> tmp = [('a', 1), ('b', 2), ('1', 3), ('d', 4), ('2', 5)]
>>> sc.parallelize(tmp).sortBy(lambda x: x[0]).collect()
[('1', 3), ('2', 5), ('a', 1), ('b',... | apache/spark | return self.keyBy(keyfunc).sortByKey(ascending, numPartitions).values() |
69 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return the Cartesian product of this RDD and another RDD.
Return the Cartesian product of this RDD and another one, that is, the
RDD of all pairs of elements C{(a, b)} where C{a} is in C{self} and
C{b} is in C{other}.
>>> rdd =... | apache/spark | deserializer = CartesianDeserializer(self._jrdd_deserializer, other._jrdd_deserializer)
return RDD(self._jrdd.cartesian(other._jrdd), self.ctx, deserializer) |
70 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return an RDD of grouped items by a function.
Return an RDD of grouped items.
>>> rdd = sc.parallelize([1, 1, 2, 3, 5, 8])
>>> result = rdd.groupBy(lambda x: x % 2).collect()
>>> sorted([(x, sorted(y)) for (x, y) in result])
... | apache/spark | return self.map(lambda x: (f(x), x)).groupByKey(numPartitions, partitionFunc) |
71 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return an RDD of strings from a shell command.
Return an RDD created by piping elements to a forked external process.
>>> sc.parallelize(['1', '2', '', '3']).pipe('cat').collect()
[u'1', u'2', u'', u'3']
:param checkCode: whet... | apache/spark | if env is None:
    env = dict()
def func(iterator):
    pipe = Popen(shlex.split(command), env=env, stdin=PIPE, stdout=PIPE)
    def pipe_objs(out):
        for obj in iterator:
            s = unicode(obj).rstrip('\n') + '\n'
            out.write(s.encode('utf-8'))
        out.close()
    Thread(target=pipe_objs, ...
72 | # complete multi-line body codes for that function<fim_prefix>#
'''
Applies a function to all elements of this RDD.
Applies a function to all elements of this RDD.
>>> def f(x): print(x)
>>> sc.parallelize([1, 2, 3, 4, 5]).foreach(f)
'''
def foreach(self, f):<fim_suffix><fim_middle> | apache/spark | f = fail_on_stopiteration(f)
def processPartition(iterator):
    for x in iterator:
        f(x)
    return iter([])
self.mapPartitions(processPartition).count() |
73 | # complete multi-line body codes for that function<fim_prefix>#
'''
Applies a function to each partition of this RDD.
Applies a function to each partition of this RDD.
>>> def f(iterator):
... for x in iterator:
... print(x)
>>> sc.parallelize([1, 2, 3, 4, 5]).foreachPartit... | apache/spark | def func(it):
    r = f(it)
    try:
        return iter(r)
    except TypeError:
        return iter([])
self.mapPartitions(func).count()
74 | # complete multi-line body codes for that function<fim_prefix>#
'''
Returns a list containing all of the elements in this RDD.
Return a list that contains all of the elements in this RDD.
.. note:: This method should only be used if the resulting array is expected
to be small, as all the data is lo... | apache/spark | with SCCallSiteSync(self.context) as css:
    sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
return list(_load_from_socket(sock_info, self._jrdd_deserializer)) |
75 | # complete multi-line body codes for that function<fim_prefix>#
'''
Reduces the elements of this RDD using the specified commutative and an associative binary operator. Currently reduces partitions locally.
Reduces the elements of this RDD using the specified commutative and
associative binary operator. Current... | apache/spark | f = fail_on_stopiteration(f)
def func(iterator):
    iterator = iter(iterator)
    try:
        initial = next(iterator)
    except StopIteration:
        return
    yield reduce(f, iterator, initial)
vals = self.mapPartitions(func).collect()
if vals:
    return reduce(f, vals)
raise ValueError('Can not reduce() empty... |
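The reduce body folds each partition without a seed (so empty partitions contribute nothing), then folds the per-partition results. A sketch over a list of partitions, with an empty RDD raising as above (names illustrative):

```python
from functools import reduce
from operator import add

def rdd_reduce(partitions, f):
    """Sketch of RDD.reduce: per-partition fold seeded by the first
    element, then a final fold over the partial results."""
    vals = []
    for part in partitions:
        it = iter(part)
        try:
            initial = next(it)
        except StopIteration:
            continue  # empty partition yields no value
        vals.append(reduce(f, it, initial))
    if vals:
        return reduce(f, vals)
    raise ValueError('Can not reduce() empty RDD')

print(rdd_reduce([[1, 2, 3], [], [4, 5]], add))  # 15
```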
76 | # complete multi-line body codes for that function<fim_prefix>#
'''
Reduces the elements of this RDD in a multi - level tree pattern.
Reduces the elements of this RDD in a multi-level tree pattern.
:param depth: suggested depth of the tree (default: 2)
>>> add = lambda x, y: x + y
>>> rdd = sc... | apache/spark | if depth < 1:
    raise ValueError('Depth cannot be smaller than 1 but got %d.' % depth)
zeroValue = (None, True)
def op(x, y):
    if x[1]:
        return y
    elif y[1]:
        return x
    else:
        return (f(x[0], y[0]), False)
reduced = self.map(lambda x: (x, False)).treeAggregate(zeroValue, op, op, depth)
... |
77 | # complete multi-line body codes for that function<fim_prefix>#
'''
Folds the elements of each partition into a single value.
Aggregate the elements of each partition, and then the results for all
the partitions, using a given associative function and a neutral "zero value."
The function C{op(t1, t2)} ... | apache/spark | op = fail_on_stopiteration(op)
def func(iterator):
    acc = zeroValue
    for obj in iterator:
        acc = op(acc, obj)
    yield acc
vals = self.mapPartitions(func).collect()
return reduce(op, vals, zeroValue) |
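The fold body applies the zero value once per partition and once more for the final merge, which is why the docstring warns that the zero may be used an arbitrary number of times. A sketch (names illustrative):

```python
from functools import reduce
from operator import add

def rdd_fold(partitions, zero, op):
    """Sketch of RDD.fold: fold each partition from `zero`, then fold
    the partial results from `zero` again."""
    vals = [reduce(op, part, zero) for part in partitions]
    return reduce(op, vals, zero)

print(rdd_fold([[1, 2], [3, 4], [5]], 0, add))  # 15
```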
78 | # complete multi-line body codes for that function<fim_prefix>#
'''
Aggregate the elements of each partition and then the results for all the partitions using a given combine functions and a neutral zeroValue value.
Aggregate the elements of each partition, and then the results for all
the partitions, using a g... | apache/spark | seqOp = fail_on_stopiteration(seqOp)
combOp = fail_on_stopiteration(combOp)
def func(iterator):
    acc = zeroValue
    for obj in iterator:
        acc = seqOp(acc, obj)
    yield acc
vals = self.mapPartitions(func).collect()
return reduce(combOp, vals, zeroValue) |
79 | # complete multi-line body codes for that function<fim_prefix>#
'''
This function aggregates the elements of this RDD in a multi - level tree.
Aggregates the elements of this RDD in a multi-level tree
pattern.
:param depth: suggested depth of the tree (default: 2)
>>> add = lambda x, y: x + y
... | apache/spark | if depth < 1:
    raise ValueError('Depth cannot be smaller than 1 but got %d.' % depth)
if self.getNumPartitions() == 0:
    return zeroValue
def aggregatePartition(iterator):
    acc = zeroValue
    for obj in iterator:
        acc = seqOp(acc, obj)
    yield acc
partiallyAggregated = self.mapPartitions(aggregatePar... |
80 | # complete multi-line body codes for that function<fim_prefix>#
'''
Find the maximum item in this RDD.
Find the maximum item in this RDD.
:param key: A function used to generate key for comparing
>>> rdd = sc.parallelize([1.0, 5.0, 43.0, 10.0])
>>> rdd.max()
43.0
>>> rdd.max(ke... | apache/spark | if key is None:
    return self.reduce(max)
return self.reduce(lambda a, b: max(a, b, key=key)) |
81 | # complete multi-line body codes for that function<fim_prefix>#
'''
Find the minimum item in this RDD.
Find the minimum item in this RDD.
:param key: A function used to generate key for comparing
>>> rdd = sc.parallelize([2.0, 5.0, 43.0, 10.0])
>>> rdd.min()
2.0
>>> rdd.min(key... | apache/spark | if key is None:
    return self.reduce(min)
return self.reduce(lambda a, b: min(a, b, key=key)) |
82 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return the sum of the elements in this RDD.
Add up the elements in this RDD.
>>> sc.parallelize([1.0, 2.0, 3.0]).sum()
6.0
'''
def sum(self):<fim_suffix><fim_middle> | apache/spark | return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add) |
83 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new RDD with the mean variance
and count of the elements in one operation.
Return a L{StatCounter} object that captures the mean, variance
and count of the RDD's elements in one operation.
'''
def stats(self):<fim_suffix><fim_m... | apache/spark | def redFunc(left_counter, right_counter):
    return left_counter.mergeStats(right_counter)
return self.mapPartitions(lambda i: [StatCounter(i)]).reduce(redFunc) |
84 | # complete multi-line body codes for that function<fim_prefix>#
'''
Compute a histogram of the given buckets.
Compute a histogram using the provided buckets. The buckets
are all open to the right except for the last which is closed.
e.g. [1,10,20,50] means the buckets are [1,10) [10,20) [20,50],
... | apache/spark | if isinstance(buckets, int):
    if buckets < 1:
        raise ValueError('number of buckets must be >= 1')
    def comparable(x):
        if x is None:
            return False
        if type(x) is float and isnan(x):
            return False
        return True
    filtered = self.filter(comparable)
    def minmax...
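The docstring above fixes the bucket semantics: with bounds `[1, 10, 20, 50]` every bucket is half-open except the last, which is closed on the right. A sketch of that counting rule for explicit bounds (function name illustrative; the int-valued `buckets` branch is not covered here):

```python
def histogram(elems, buckets):
    """Count elements into [b0,b1), [b1,b2), ..., [b_{n-1}, b_n],
    the last bucket being closed on the right."""
    counts = [0] * (len(buckets) - 1)
    for x in elems:
        for i in range(len(buckets) - 1):
            last = i == len(buckets) - 2
            if buckets[i] <= x < buckets[i + 1] or (last and x == buckets[-1]):
                counts[i] += 1
                break
    return counts

print(histogram([1, 5, 10, 15, 20, 50], [1, 10, 20, 50]))  # [2, 2, 2]
```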
85 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return the count of each unique value in this RDD as a dictionary of
= > count
Return the count of each unique value in this RDD as a dictionary of
(value, count) pairs.
>>> sorted(sc.parallelize([1, 2, 1, 2, 2], 2).countByValu... | apache/spark | def countPartition(iterator):
    counts = defaultdict(int)
    for obj in iterator:
        counts[obj] += 1
    yield counts
def mergeMaps(m1, m2):
    for (k, v) in m2.items():
        m1[k] += v
    return m1
return self.mapPartitions(countPartition).reduce(mergeMaps) |
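The countByValue body counts within each partition and then merges the per-partition dictionaries. The same two phases over a list of partitions (names illustrative):

```python
from collections import defaultdict

def count_by_value(partitions):
    """Sketch of RDD.countByValue: per-partition counts merged by
    summing, as in countPartition/mergeMaps above."""
    result = defaultdict(int)
    for part in partitions:
        counts = defaultdict(int)
        for obj in part:
            counts[obj] += 1
        for k, v in counts.items():  # mergeMaps step
            result[k] += v
    return dict(result)

print(sorted(count_by_value([[1, 2, 1], [2, 2]]).items()))  # [(1, 2), (2, 3)]
```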
86 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return the top N elements from an RDD.
Get the top N elements from an RDD.
.. note:: This method should only be used if the resulting array is expected
to be small, as all the data is loaded into the driver's memory.
.. no... | apache/spark | def topIterator(iterator):
    yield heapq.nlargest(num, iterator, key=key)
def merge(a, b):
    return heapq.nlargest(num, a + b, key=key)
return self.mapPartitions(topIterator).reduce(merge) |
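The top body keeps the `num` largest elements per partition with `heapq.nlargest`, then merges the per-partition winners the same way, so no partition ever ships more than `num` elements. A sketch (names illustrative):

```python
import heapq

def rdd_top(partitions, num, key=None):
    """Sketch of RDD.top: nlargest per partition, then pairwise merge
    of the partial top lists."""
    merged = []
    for part in partitions:
        partial = heapq.nlargest(num, part, key=key)
        merged = heapq.nlargest(num, merged + partial, key=key)
    return merged

print(rdd_top([[10, 4, 2], [12, 3], [9, 7]], 3))  # [12, 10, 9]
```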
87 | # complete multi-line body codes for that function<fim_prefix>#
'''
Take the N elements from an RDD ordered in ascending order or as
is specified by the optional key function.
Get the N elements from an RDD ordered in ascending order or as
specified by the optional key function.
.. note:: t... | apache/spark | def merge(a, b):
    return heapq.nsmallest(num, a + b, key)
return self.mapPartitions(lambda it: [heapq.nsmallest(num, it, key)]).reduce(merge) |
88 | # complete multi-line body codes for that function<fim_prefix>#
'''
Take the first num elements of the RDD.
Take the first num elements of the RDD.
It works by first scanning one partition, and use the results from
that partition to estimate the number of additional partitions needed
to satisfy... | apache/spark | items = []
totalParts = self.getNumPartitions()
partsScanned = 0
while len(items) < num and partsScanned < totalParts:
    numPartsToTry = 1
    if partsScanned > 0:
        if len(items) == 0:
            numPartsToTry = partsScanned * 4
        else:
            numPartsToTry = int(1.5 * num * partsScanned / len(item...
89 | # complete multi-line body codes for that function<fim_prefix>#
'''
Save a Python RDD of key - value pairs to any Hadoop file
system using the new Hadoop OutputFormat API.
Output a Python RDD of key-value pairs (of form C{RDD[(K, V)]}) to any Hadoop file
system, using the new Hadoop OutputFormat API (ma... | apache/spark | jconf = self.ctx._dictToJavaMap(conf)
pickledRDD = self._pickled()
self.ctx._jvm.PythonRDD.saveAsHadoopDataset(pickledRDD._jrdd, True, jconf, keyConverter, valueConverter, True) |
90 | # complete multi-line body codes for that function<fim_prefix>#
'''
Save the current RDD to a new Hadoop file using the new API.
Output a Python RDD of key-value pairs (of form C{RDD[(K, V)]}) to any Hadoop file
system, using the new Hadoop OutputFormat API (mapreduce package). Key and value types
will ... | apache/spark | jconf = self.ctx._dictToJavaMap(conf)
pickledRDD = self._pickled()
self.ctx._jvm.PythonRDD.saveAsNewAPIHadoopFile(pickledRDD._jrdd, True, path, outputFormatClass, keyClass, valueClass, keyConverter, valueConverter, jconf) |
91 | # complete multi-line body codes for that function<fim_prefix>#
'''
Save the current RDD to a sequence file.
Output a Python RDD of key-value pairs (of form C{RDD[(K, V)]}) to any Hadoop file
system, using the L{org.apache.hadoop.io.Writable} types that we convert from the
RDD's key and value types. The... | apache/spark | pickledRDD = self._pickled()
self.ctx._jvm.PythonRDD.saveAsSequenceFile(pickledRDD._jrdd, True, path, compressionCodecClass) |
92 | # complete multi-line body codes for that function<fim_prefix>#
'''
Save this RDD as a PickleFile.
Save this RDD as a SequenceFile of serialized objects. The serializer
used is L{pyspark.serializers.PickleSerializer}, default batch size
is 10.
>>> tmpFile = NamedTemporaryFile(delete=True)
... | apache/spark | if batchSize == 0:
    ser = AutoBatchedSerializer(PickleSerializer())
else:
    ser = BatchedSerializer(PickleSerializer(), batchSize)
self._reserialize(ser)._jrdd.saveAsObjectFile(path) |
93 | # complete multi-line body codes for that function<fim_prefix>#
'''
Save this RDD as a text file using string representations of elements.
Save this RDD as a text file, using string representations of elements.
@param path: path to text file
@param compressionCodecClass: (None by default) string i.e.
... | apache/spark | def func(split, iterator):
    for x in iterator:
        if not isinstance(x, (unicode, bytes)):
            x = unicode(x)
        if isinstance(x, unicode):
            x = x.encode('utf-8')
        yield x
keyed = self.mapPartitionsWithIndex(func)
keyed._bypass_serializer = True
if compressionCodecClass:
    compre...
94 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new RDD with the values for each key using an associative and commutative reduce function.
Merge the values for each key using an associative and commutative reduce function.
This will also perform the merging locally on each mapper b... | apache/spark | return self.combineByKey(lambda x: x, func, func, numPartitions, partitionFunc) |
95 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new DStream with the values for each key using an associative and commutative reduce function.
Merge the values for each key using an associative and commutative reduce function, but
return the results immediately to the master as a di... | apache/spark | func = fail_on_stopiteration(func)
def reducePartition(iterator):
    m = {}
    for (k, v) in iterator:
        m[k] = func(m[k], v) if k in m else v
    yield m
def mergeMaps(m1, m2):
    for (k, v) in m2.items():
        m1[k] = func(m1[k], v) if k in m1 else v
    return m1
return self.mapPartitions(reducePartiti... |
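The reduceByKeyLocally body builds one dict per partition and merges them with the same reduce function. A single-process sketch (names illustrative):

```python
from operator import add

def reduce_by_key_locally(partitions, func):
    """Sketch of RDD.reduceByKeyLocally: reducePartition builds a
    per-partition dict; mergeMaps folds those dicts together."""
    result = {}
    for part in partitions:
        m = {}
        for k, v in part:
            m[k] = func(m[k], v) if k in m else v
        for k, v in m.items():  # mergeMaps step
            result[k] = func(result[k], v) if k in result else v
    return result

print(sorted(reduce_by_key_locally([[('a', 1), ('b', 1)], [('a', 1)]], add).items()))
# [('a', 2), ('b', 1)]
```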
96 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a copy of the RDD partitioned by the specified partitioner.
Return a copy of the RDD partitioned using the specified partitioner.
>>> pairs = sc.parallelize([1, 2, 3, 4, 2, 4, 1]).map(lambda x: (x, x))
>>> sets = pairs.partition... | apache/spark | if numPartitions is None:
    numPartitions = self._defaultReducePartitions()
partitioner = Partitioner(numPartitions, partitionFunc)
if self.partitioner == partitioner:
    return self
outputSerializer = self.ctx._unbatched_serializer
limit = _parse_memory(self.ctx._conf.get('spark.python.worker.memory', '512m')) / 2
... |
97 | # complete multi-line body codes for that function<fim_prefix>#
'''
This function returns an RDD of elements from the first entry in the RDD that are combined with the second entry in the RDD.
Generic function to combine the elements for each key using a custom
set of aggregation functions.
Turns an RD... | apache/spark | if numPartitions is None:
    numPartitions = self._defaultReducePartitions()
serializer = self.ctx.serializer
memory = self._memory_limit()
agg = Aggregator(createCombiner, mergeValue, mergeCombiners)
def combineLocally(iterator):
    merger = ExternalMerger(agg, memory * 0.9, serializer)
    merger.mergeValues(itera...
98 | # complete multi-line body codes for that function<fim_prefix>#
'''
Aggregate the values of each key using given combine functions and a neutral
zero value.
Aggregate the values of each key, using given combine functions and a neutral
"zero value". This function can return a different result type, U, th... | apache/spark | def createZero():
    return copy.deepcopy(zeroValue)
return self.combineByKey(lambda v: seqFunc(createZero(), v), seqFunc, combFunc, numPartitions, partitionFunc) |
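The createZero pattern above deep-copies the zero value before seeding each key, so a mutable zero (e.g. a list) is never shared between keys. A sketch of that pattern on plain pairs (function name illustrative):

```python
import copy

def aggregate_by_key(pairs, zero_value, seq_func):
    """Sketch of the aggregateByKey zero handling: each new key starts
    from a deep copy of zero_value, then folds its values with seq_func."""
    acc = {}
    for k, v in pairs:
        if k not in acc:
            acc[k] = copy.deepcopy(zero_value)  # fresh zero per key
        acc[k] = seq_func(acc[k], v)
    return acc

out = aggregate_by_key([('a', 1), ('a', 2), ('b', 3)], [], lambda z, v: z + [v])
print(sorted(out.items()))  # [('a', [1, 2]), ('b', [3])]
```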
99 | # complete multi-line body codes for that function<fim_prefix>#
'''
Return a new table with the values for each key in the table grouped by func.
Merge the values for each key using an associative function "func"
and a neutral "zeroValue" which may be added to the result an
arbitrary number of times, an... | apache/spark | def createZero():
    return copy.deepcopy(zeroValue)
return self.combineByKey(lambda v: func(createZero(), v), func, func, numPartitions, partitionFunc) |
End of preview.