Creates the objects from the JSON response.
def build(self):
"""
Creates the objects from the JSON response.
"""
if self.json['sys']['type'] == 'Array':
return self._build_array()
return self._build_item(self.json) |
Creates a resource with a given ID (optional) and attributes for the current content type.
def create(self, resource_id=None, attributes=None):
"""
Creates a resource with a given ID (optional) and attributes for the current content type.
"""
return self.proxy.create(resource_id=resour... |
Finds a single resource by ID related to the current space.
def find(self, resource_id, query=None):
"""
Finds a single resource by ID related to the current space.
"""
return self.proxy.find(resource_id, query=query) |
Returns ngroups count if it was specified in the query, otherwise raises ValueError.
If grouping on more than one field, provide the field argument to specify which count you are looking for.
def get_ngroups(self, field=None):
'''
Returns ngroups count if it was specified in the query, otherwise Valu... |
Returns 'matches' from group response.
If grouping on more than one field, provide the field argument to specify which count you are looking for.
def get_groups_count(self, field=None):
'''
Returns 'matches' from group response.
If grouping on more than one field, prov... |
Flattens the group response and just returns a list of documents.
def get_flat_groups(self, field=None):
'''
Flattens the group response and just returns a list of documents.
'''
field = field if field else self._determine_group_field(field)
temp_groups = self.data['grouped'][fi... |
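The flattening described above can be sketched against Solr's standard grouped-response shape; the `price` field and the sample response below are made up for illustration:

```python
# A minimal sketch of flattening a Solr grouped response. The input shape
# mirrors Solr's standard 'grouped' output; field name and docs are hypothetical.
def flatten_groups(response, field):
    """Return a flat list of documents from a grouped Solr response."""
    groups = response['grouped'][field]['groups']
    docs = []
    for group in groups:
        docs.extend(group['doclist']['docs'])
    return docs

grouped = {
    'grouped': {
        'price': {
            'groups': [
                {'groupValue': 10, 'doclist': {'docs': [{'id': 'a'}, {'id': 'b'}]}},
                {'groupValue': 20, 'doclist': {'docs': [{'id': 'c'}]}},
            ]
        }
    }
}
print(flatten_groups(grouped, 'price'))  # [{'id': 'a'}, {'id': 'b'}, {'id': 'c'}]
```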
Returns a dictionary of facets::
>>> res = solr.query('SolrClient_unittest',{
'q':'product_name:Lorem',
'facet':True,
'facet.field':'facet_test',
})
>>> res.get_results_count()
4
>>> r... |
Returns query facet ranges ::
>>> res = solr.query('SolrClient_unittest',{
'q':'*:*',
'facet':True,
'facet.range':'price',
'facet.range.start':0,
'facet.range.end':100,
'facet.range.gap':10
})
... |
Parses facet pivot response. Example::
>>> res = solr.query('SolrClient_unittest',{
'q':'*:*',
'fq':'price:[50 TO *]',
'facet':True,
'facet.pivot':'facet_test,price' #Note how there is no space between fields. They are just separated by commas
})
... |
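Solr's facet.pivot output nests each level under a 'pivot' key; a minimal recursive parse (the field names and counts below are hypothetical) might look like:

```python
# A hedged sketch of collapsing Solr's facet.pivot output into nested dicts.
# The response shape follows Solr's standard pivot format; sample data is made up.
def pivot_to_dict(pivots):
    """Collapse a facet.pivot list into nested {value: count-or-subdict}."""
    out = {}
    for p in pivots:
        if 'pivot' in p:
            out[p['value']] = pivot_to_dict(p['pivot'])
        else:
            out[p['value']] = p['count']
    return out

pivots = [
    {'field': 'facet_test', 'value': 'Lorem', 'count': 3,
     'pivot': [{'field': 'price', 'value': 79, 'count': 2},
               {'field': 'price', 'value': 99, 'count': 1}]},
    {'field': 'facet_test', 'value': 'ipsum', 'count': 1},
]
print(pivot_to_dict(pivots))  # {'Lorem': {79: 2, 99: 1}, 'ipsum': 1}
```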
:param str field: The name of the field for which to pull in values.
Will parse the query results (must be ungrouped) and return all values of 'field' as a list. Note that these are not unique values. Example::
>>> r.get_field_values_as_list('product_name_exact')
['Mauris risus risus l... |
:param str field: The name of the field for lookup.
Goes through all documents returned looking for specified field. At first encounter will return the field's value.
def get_first_field_values_as_list(self, field):
'''
:param str field: The name of the field for lookup.
Goes through ... |
:param str field: Name of facet field to retrieve values from.
Returns facet values as list for a given field. Example::
>>> res = solr.query('SolrClient_unittest',{
'q':'*:*',
'facet':'true',
'facet.field':'facet_test',
})
>>... |
:param str field: Name of facet field to retrieve keys from.
Similar to get_facet_values_as_list but returns the facet keys as a list instead.
Example::
>>> r.get_facet_keys_as_list('facet_test')
['Lorem', 'ipsum', 'amet,', 'dolor', 'sit']
def get_facet_keys_as_list(self,fie... |
EXPERIMENTAL
Tries to return the json.facet output.
def json_facet(self, field=None):
'''
EXPERIMENTAL
Tries to return the json.facet output.
'''
facets = self.data['facets']
if field is None:
temp_fields = [x for x in facets.keys() if x != 'cou... |
EXPERIMENTAL
Takes facets and returns them as a dictionary that is easier to work with,
for example, if you are getting something like this::
{'facets': {'count': 50,
'test': {'buckets': [{'count': 10,
'pr': {'buckets': [{'count': 2, 'unique': 1, 'val': 79},
... |
Generates a random file name based on self._output_filename_pattern for the output todo file.
def _gen_file_name(self):
'''
Generates a random file name based on self._output_filename_pattern for the output todo file.
'''
date = datetime.datetime.now()
dt = "{}-{}-{}-{}-{}-{}-... |
Takes a string, dictionary, or list of items to add to the queue. To help with troubleshooting it will output the updated buffer size; however, when the content gets written it will output the file path of the new file instead. Generally this can be safely discarded.
:param <dict,list> item: Item to add to the queue. If dict wi... |
Locks, or returns False if already locked
def _lock(self):
'''
Locks, or returns False if already locked
'''
if not self._is_locked():
with open(self._lck,'w') as fh:
if self._devel: self.logger.debug("Locking")
fh.write(str(os.getpid()))
... |
Checks to see if we are already pulling items from the queue
def _is_locked(self):
'''
Checks to see if we are already pulling items from the queue
'''
if os.path.isfile(self._lck):
try:
import psutil
except ImportError:
return Tru... |
Unlocks the index
def _unlock(self):
'''
Unlocks the index
'''
if self._devel: self.logger.debug("Unlocking Index")
if self._is_locked():
os.remove(self._lck)
return True
else:
return True |
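The lock/unlock pair above can be sketched as a small pid-file class; the path handling here is an assumption, and the real IndexQ adds a psutil liveness check that this sketch omits:

```python
import os
import tempfile

# A hedged sketch of the pid-file locking shown above. Paths are hypothetical;
# the real implementation also checks whether the recorded pid is still alive.
class PidLock:
    def __init__(self, path):
        self._lck = path

    def _is_locked(self):
        return os.path.isfile(self._lck)

    def _lock(self):
        if self._is_locked():
            return False
        with open(self._lck, 'w') as fh:
            fh.write(str(os.getpid()))  # record who holds the lock
        return True

    def _unlock(self):
        if self._is_locked():
            os.remove(self._lck)
        return True

tmp = tempfile.mkdtemp()
lock = PidLock(os.path.join(tmp, 'index.lock'))
print(lock._lock())    # True: acquired
print(lock._lock())    # False: already held
print(lock._unlock())  # True
```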
Returns a list of the full paths to all items currently in the todo directory. The items will be listed in ascending order based on filesystem time.
This will re-scan the directory on each execution.
Do not use this to process items, this method should only be used for troubleshooting or something a... |
Returns an iterator that will provide each item in the todo queue. Note that to complete each item you have to run complete method with the output of this iterator.
That will move the item to the done directory and prevent it from being retrieved in the future.
def get_todo_items(self, **kwargs):
'''
... |
Marks the item as complete by moving it to the done directory and optionally gzipping it.
def complete(self, filepath):
'''
Marks the item as complete by moving it to the done directory and optionally gzipping it.
'''
if not os.path.exists(filepath):
raise FileNotFoundError(... |
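The complete step (move to the done directory, optionally gzip) can be sketched with the standard library; the directory layout and filenames here are made up:

```python
import gzip
import os
import shutil
import tempfile

# A minimal sketch of marking a todo file complete: move it into a done
# directory, gzipping on the way if asked. Paths below are hypothetical.
def complete(filepath, done_dir, compress=True):
    if not os.path.exists(filepath):
        raise FileNotFoundError(filepath)
    name = os.path.basename(filepath)
    if compress:
        dest = os.path.join(done_dir, name + '.gz')
        with open(filepath, 'rb') as src, gzip.open(dest, 'wb') as dst:
            shutil.copyfileobj(src, dst)
        os.remove(filepath)  # source is consumed once the gzip copy exists
    else:
        dest = os.path.join(done_dir, name)
        shutil.move(filepath, dest)
    return dest

todo = tempfile.mkdtemp()
done = tempfile.mkdtemp()
path = os.path.join(todo, 'batch_1.json')
with open(path, 'w') as fh:
    fh.write('[{"id": "test1"}]')
dest = complete(path, done)
print(os.path.basename(dest))  # batch_1.json.gz
```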
Will index the queue into a specified Solr instance and collection. Specify multiple threads to make this faster; however, keep in mind that with multiple threads the items may not be indexed in order.
Example::
solr = SolrClient('http://localhost:8983/solr/')
for doc in self.docs:
... |
Gets all data from the todo files in indexq and returns one huge list of all data.
def get_all_json_from_indexq(self):
'''
Gets all data from the todo files in indexq and returns one huge list of all data.
'''
files = self.get_all_as_list()
out = []
for efile in files:
... |
This helps IndexQ operate in a multiprocessing environment without each process having to have its own IndexQ. It is also a handy way to deal with thread / process safety.
This method will create and return a JoinableQueue object. Additionally, it will kick off a back end process that will monitor the queue, de... |
Internal mechanism to try to send data to multiple Solr Hosts if
the query fails on the first one.
def _retry(function):
"""
Internal mechanism to try to send data to multiple Solr Hosts if
the query fails on the first one.
"""
def inner(self, **kwargs):
las... |
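The _retry failover can be sketched as a plain decorator; the host list and error type below are stand-ins for SolrClient's internals:

```python
# A sketch of the host-failover retry decorator described above. The real
# client wraps HTTP requests; here a hypothetical host list and ConnectionError
# stand in for its internals.
def retry(function):
    def inner(self, **kwargs):
        last_exc = None
        for host in self.hosts:
            try:
                return function(self, host=host, **kwargs)
            except ConnectionError as exc:
                last_exc = exc  # remember the failure, try the next host
        raise last_exc
    return inner

class Client:
    def __init__(self, hosts):
        self.hosts = hosts

    @retry
    def query(self, host=None, q=None):
        if host == 'bad':
            raise ConnectionError('host down: %s' % host)
        return (host, q)

c = Client(['bad', 'good'])
print(c.query(q='*:*'))  # ('good', '*:*')
```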
:param str collection: The name of the collection for the request
:param bool openSearcher: If new searcher is to be opened
:param bool softCommit: SoftCommit
:param bool waitServer: Blocks until the new searcher is opened
:param bool commit: Commit
Sends a commit to a Solr coll... |
:param str collection: The name of the collection for the request
:param str request_handler: Request handler, default is 'select'
:param dict query: Python dictionary of Solr query parameters.
Sends a query to Solr, returns a dict. `query` should be a dictionary of solr request handler argumen... |
:param str collection: The name of the collection for the request
:param str request_handler: Request handler, default is 'select'
:param dict query: Python dictionary of Solr query parameters.
Sends a query to Solr, returns a SolrResults Object. `query` should be a dictionary of solr request ha... |
:param str collection: The name of the collection for the request.
:param list docs: List of dicts. ex: [{"title": "testing solr indexing", "id": "test1"}]
:param int min_rf: Required number of replicas to write to
Sends supplied list of dicts to solr for indexing. ::
... |
:param str collection: The name of the collection for the request.
:param str data: Valid Solr JSON as a string. ex: '[{"title": "testing solr indexing", "id": "test1"}]'
:param int min_rf: Required number of replicas to write to
Sends supplied json to solr for indexing, supplied J... |
:param str collection: The name of the collection for the request
:param str doc_id: ID of the document to be retrieved.
Retrieve document from Solr based on the ID. ::
>>> solr.get('SolrClient_unittest','changeme')
def get(self, collection, doc_id, **kwargs):
"""
:param s... |
:param str collection: The name of the collection for the request
:param tuple doc_ids: ID of the document to be retrieved.
Retrieve documents from Solr based on the ID. ::
>>> solr.get('SolrClient_unittest','changeme')
def mget(self, collection, doc_ids, **kwargs):
"""
:p... |
:param str collection: The name of the collection for the request
:param str id: ID of the document to be deleted. Can specify '*' to delete everything.
Deletes items from Solr based on the ID. ::
>>> solr.delete_doc_by_id('SolrClient_unittest','changeme')
def delete_doc_by_id(self, colle... |
:param str collection: The name of the collection for the request
:param str query: Query selecting documents to be deleted.
Deletes items from Solr based on a given query. ::
>>> solr.delete_doc_by_query('SolrClient_unittest','*:*')
def delete_doc_by_query(self, collection, query, **kwar... |
:param str collection: The name of the collection for the request
:param str filename: String file path of the file to index.
Will index the specified file into Solr. The `file` must be local to the server; this is faster than other indexing options.
If the files are already on the servers I sugges... |
:param str collection: The name of the collection for the request.
:param dict query: Dictionary of solr args.
:param int rows: Number of rows to return in each batch. Default is 1000.
:param int start: What position to start with. Default is 0.
:param int max_start: Once the start will ... |
:param str collection: The name of the collection for the request.
:param dict query: Dictionary of solr args.
Will page through the result set in increments using cursorMark until it has all items. Sort is required for cursorMark queries; if you don't specify it, the default is 'id desc'.
... |
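The cursorMark loop can be sketched with a stub in place of the real Solr request; the part being illustrated is the end-of-results contract, where the server repeats the last cursor mark:

```python
# A hedged sketch of cursorMark deep paging. fake_query stands in for a real
# Solr request and returns (nextCursorMark, docs) per the cursor contract;
# the data and page size are made up.
def cursor_query(query_fn, rows=2):
    cursor = '*'
    out = []
    while True:
        next_cursor, docs = query_fn(cursor, rows)
        out.extend(docs)
        if next_cursor == cursor:  # Solr signals the end by repeating the mark
            break
        cursor = next_cursor
    return out

DATA = list(range(5))

def fake_query(cursor, rows):
    start = 0 if cursor == '*' else int(cursor)
    docs = DATA[start:start + rows]
    nxt = str(start + len(docs))
    return (cursor if not docs else nxt, docs)

print(cursor_query(fake_query))  # [0, 1, 2, 3, 4]
```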
You can change this function to get the shard-map from somewhere else in conjunction with
save_shard_map().
def get_shard_map(self, force_refresh=False):
"""
You can change this function to get the shard-map from somewhere else in conjunction with
save_shard_ma... |
Will attempt to telnet to each zookeeper that is used by SolrClient and issue 'mntr' command. Response is parsed to check to see if the
zookeeper node is a leader or a follower and returned as a dict.
If the telnet connection fails or the proper response is not parsed, the zk node will be listed as '... |
Copies collection configs into a new folder. Can be used to create new collections based on existing configs.
Basically, copies all nodes under /configs/original to /configs/new.
:param original str: ZK name of original config
:param new str: New name of the ZK config.
def copy_config(self, ... |
Downloads ZK Directory to the FileSystem.
:param collection str: Name of the collection (zk config name)
:param fs_path str: Destination filesystem path.
def download_collection_configs(self, collection, fs_path):
'''
Downloads ZK Directory to the FileSystem.
:param collection... |
Uploads collection configurations from a specified directory to zookeeper.
def upload_collection_configs(self, collection, fs_path):
'''
Uploads collection configurations from a specified directory to zookeeper.
'''
coll_path = fs_path
if not os.path.isdir(coll_path):
... |
Creates a new field in managed schema, will raise ValueError if the field already exists. field_dict should look like this::
{
"name":"sell-by",
"type":"tdate",
"stored":True
}
Reference: https://cwiki.apache.org/confluence/display/so... |
Deletes a field from the Solr Collection. Will raise ValueError if the field doesn't exist.
:param string collection: Name of the collection for the action
:param string field_name: String name of the field.
def delete_field(self,collection,field_name):
'''
Deletes a field from the Sol... |
Checks if the field exists will return a boolean True (exists) or False(doesn't exist).
:param string collection: Name of the collection for the action
:param string field_name: String name of the field.
def does_field_exist(self,collection,field_name):
'''
Checks if the field exists w... |
Creates a copy field.
copy_dict should look like ::
{'source':'source_field_name','dest':'destination_field_name'}
:param string collection: Name of the collection for the action
:param dict copy_field: Dictionary of field info
Reference: https://cwiki.apache.org/confluen... |
Deletes a copy field.
copy_dict should look like ::
{'source':'source_field_name','dest':'destination_field_name'}
:param string collection: Name of the collection for the action
:param dict copy_field: Dictionary of field info
def delete_copy_field(self, collection, copy_dict):
... |
Shuffle hosts so we don't always query the first one.
Example: using in a webapp with X processes in Y servers, the hosts contacted will be more random.
The user can also call this function to reshuffle every 'x' seconds or before every request.
:return:
def shuffle_hosts(self):
"""
... |
Starts a virtual display which will be
destroyed after test execution ends
*Arguments:*
- width: a width to be set in pixels
- height: a height to be set in pixels
- color_depth: a color depth to be used
- kwargs: extra parameters
*Example:*
| Sta... |
Sends a request to Solr Collections API.
Documentation is here: https://cwiki.apache.org/confluence/display/solr/Collections+API
:param string action: Name of the collection for the action
:param dict args: Dictionary of specific parameters for action
def api(self, action, args=None):
... |
Returns a slightly slimmed down version of the clusterstatus api command. It also gets count of documents in each shard on each replica and returns
it as doc_count key for each replica.
def clusterstatus(self):
"""
Returns a slightly slimmed down version of the clusterstatus api command. It als... |
Create a new collection.
def create(self, name, numShards, params=None):
"""
Create a new collection.
"""
if params is None:
params = {}
params.update(
name=name,
numShards=numShards
)
return self.api('CREATE', params) |
Queries each core to get individual counts for each core for each shard.
def _get_collection_counts(self, core_data):
"""
Queries each core to get individual counts for each core for each shard.
"""
if core_data['base_url'] not in self.solr_clients:
from SolrClient import So... |
Checks status of each collection and shard to make sure that:
a) Cluster state is active
b) Number of docs matches across replicas for a given shard.
Returns a dict of results for custom alerting.
def check_status(self, ignore=(), status=None):
"""
Checks status of each coll... |
Starts Reindexing Process. All parameter arguments will be passed down to the getter function.
:param string fq: FilterQuery to pass to source Solr to retrieve items. This can be used to limit the results.
def reindex(self, fq= [], **kwargs):
'''
Starts Reindexing Process. All parameter argu... |
Method for retrieving batch data from Solr.
def _from_solr(self, fq=[], report_frequency = 25):
'''
Method for retrieving batch data from Solr.
'''
cursor = '*'
stime = datetime.now()
query_count = 0
while True:
#Get data with starting cursorM... |
Removes ignore fields from the data that we got from Solr.
def _trim_fields(self, docs):
'''
Removes ignore fields from the data that we got from Solr.
'''
for doc in docs:
for field in self._ignore_fields:
if field in doc:
del(doc[... |
Query template for source Solr, sorts by id by default.
def _get_query(self, cursor):
'''
Query template for source Solr, sorts by id by default.
'''
query = {'q':'*:*',
'sort':'id desc',
'rows':self._rows,
'cursorMark':cursor}
... |
Sends data to a Solr instance.
def _to_solr(self, data):
'''
Sends data to a Solr instance.
'''
return self._dest.index_json(self._dest_coll, json.dumps(data,sort_keys=True)) |
Gets counts of items per specified date range.
:param collection: Solr Collection to use.
:param timespan: Solr Date Math compliant value for faceting ex HOUR, MONTH, DAY
def _get_date_range_query(self, start_date, end_date, timespan= 'DAY', date_field= None):
'''
Gets counts of ite... |
This method is used to get start and end dates for the collection.
def _get_edge_date(self, date_field, sort):
'''
This method is used to get start and end dates for the collection.
'''
return self._source.query(self._source_coll, {
'q':'*:*',
'rows... |
Returns Range Facet counts based on
def _get_date_facet_counts(self, timespan, date_field, start_date=None, end_date=None):
'''
Returns Range Facet counts based on
'''
if 'DAY' not in timespan:
raise ValueError("At this time, only DAY date range increment is supported. ... |
This method may help if the original run was interrupted for some reason. It will only work under the following conditions
* You have a date field that you can facet on
* Indexing was stopped for the duration of the copy
The way this tries to resume re-indexing is by running a date range fa... |
Negotiate a new SSH2 session as a client. This is the first step after
creating a new L{Transport}. A separate thread is created for protocol
negotiation.
If an event is passed in, this method returns immediately. When
negotiation is done (successful or not), the given C{Event} will
... |
Negotiate a new SSH2 session as a server. This is the first step after
creating a new L{Transport} and setting up your server host key(s). A
separate thread is created for protocol negotiation.
If an event is passed in, this method returns immediately. When
negotiation is done (succe... |
Close this session, and any open channels that are tied to it.
def close(self):
"""
Close this session, and any open channels that are tied to it.
"""
if not self.active:
return
self.active = False
self.packetizer.close()
self.join()
for chan ... |
Request a new channel back to the client, of type C{"forwarded-tcpip"}.
This is used after a client has requested port forwarding, for sending
incoming connections back to the client.
@param src_addr: originator's address
@param src_port: originator's port
@param dest_addr: loca... |
Ask the server to forward TCP connections from a listening port on
the server, across this SSH session.
If a handler is given, that handler is called from a different thread
whenever a forwarded connection arrives. The handler parameters are::
handler(channel, (origin_addr, origin... |
Send a junk packet across the encrypted link. This is sometimes used
to add "noise" to a connection to confuse would-be attackers. It can
also be used as a keep-alive for long lived connections traversing
firewalls.
@param bytes: the number of random bytes to send in the payload of th... |
Force this session to switch to new keys. Normally this is done
automatically after the session hits a certain number of packets or
bytes sent or received, but this method gives you the option of forcing
new keys whenever you want. Negotiating new keys causes a pause in
traffic both wa... |
Turn on/off keepalive packets (default is off). If this is set, after
C{interval} seconds without sending any data over the connection, a
"keepalive" packet will be sent (and ignored by the remote host). This
can be useful to keep connections alive over a NAT, for example.
@param inte... |
Negotiate an SSH2 session, and optionally verify the server's host key
and authenticate using a password or private key. This is a shortcut
for L{start_client}, L{get_remote_server_key}, and
L{Transport.auth_password} or L{Transport.auth_publickey}. Use those
methods if you want more c... |
Try to authenticate to the server using no authentication at all.
This will almost always fail. It may be useful for determining the
list of authentication types supported by the server, by catching the
L{BadAuthenticationType} exception raised.
@param username: the username to authent... |
send a message, but block if we're in key negotiation. this is used
for user-initiated requests.
def _send_user_message(self, data):
"""
send a message, but block if we're in key negotiation. this is used
for user-initiated requests.
"""
start = time.time()
whi... |
used by a kex object to set the K (root key) and H (exchange hash)
def _set_K_H(self, k, h):
"used by a kex object to set the K (root key) and H (exchange hash)"
self.K = k
self.H = h
if self.session_id is None:
self.session_id = h |
id is 'A' - 'F' for the various keys used by ssh
def _compute_key(self, id, nbytes):
"id is 'A' - 'F' for the various keys used by ssh"
m = Message()
m.add_mpint(self.K)
m.add_bytes(self.H)
m.add_byte(id)
m.add_bytes(self.session_id)
out = sofar = SHA.new(str(m))... |
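The truncated _compute_key above follows the RFC 4253 key-derivation scheme: each key is a hash of K, H, the key id, and the session id, extended with further hashes until enough bytes exist. A sketch using hashlib's SHA-1 (the original uses PyCrypto's SHA object; the sample K, H, and session id are made up):

```python
import hashlib
import struct

# A sketch of RFC 4253 section 7.2 key derivation: key = SHA-1(K || H || id ||
# session_id), extended with SHA-1(K || H || sofar) until nbytes are available.
# K is encoded as an SSH mpint (4-byte length prefix, big-endian magnitude).
def encode_mpint(n):
    data = n.to_bytes((n.bit_length() + 8) // 8, 'big')  # spare high byte keeps it positive
    return struct.pack('>I', len(data)) + data

def compute_key(K, H, key_id, session_id, nbytes):
    base = encode_mpint(K) + H
    out = hashlib.sha1(base + key_id + session_id).digest()
    while len(out) < nbytes:
        out += hashlib.sha1(base + out).digest()  # chain hashes to extend
    return out[:nbytes]

key = compute_key(0x1234, b'\x00' * 20, b'A', b'\x00' * 20, 32)
print(len(key))  # 32
```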
announce to the other side that we'd like to negotiate keys, and what
kind of key negotiation we support.
def _send_kex_init(self):
"""
announce to the other side that we'd like to negotiate keys, and what
kind of key negotiation we support.
"""
self.clear_to_send_lock.a... |
primitive attempt at prime generation
def _generate_prime(bits, rng):
"primitive attempt at prime generation"
hbyte_mask = pow(2, bits % 8) - 1
while True:
# loop catches the case where we increment n into a higher bit-range
x = rng.read((bits+7) // 8)
if hbyte_mask > 0:
x... |
returns a random # from 0 to N-1
def _roll_random(rng, n):
"returns a random # from 0 to N-1"
bits = util.bit_length(n-1)
bytes = (bits + 7) // 8
hbyte_mask = pow(2, bits % 8) - 1
# so here's the plan:
# we fetch as many random bits as we'd need to fit N-1, and if the
# generated number is... |
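The plan spelled out in those comments (fetch as many random bits as needed to fit N-1, mask the spare high bits, and reject draws that land at or above N) can be sketched as:

```python
import os

# A sketch of rejection-sampled uniform integers in [0, n), as in _roll_random
# above: draw just enough bytes to cover n-1, mask off surplus high bits so at
# most about half of draws are rejected, and retry until the value fits.
def roll_random(n):
    if n <= 1:
        return 0
    bits = (n - 1).bit_length()
    nbytes = (bits + 7) // 8
    hbyte_mask = (1 << (bits % 8)) - 1 if bits % 8 else 0xFF
    while True:
        x = bytearray(os.urandom(nbytes))
        x[0] &= hbyte_mask  # zero the bits above the target range
        num = int.from_bytes(bytes(x), 'big')
        if num < n:
            return num

print(all(0 <= roll_random(100) < 100 for _ in range(1000)))  # True
```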
Write an SSH2-format private key file in a form that can be read by
ssh or openssh. If no password is given, the key is written in
a trivially-encoded format (base64) which is completely insecure. If
a password is given, DES-EDE3-CBC is used.
@param tag: C{"RSA"} or C{"DSA"}, the tag ... |
Set an event on this buffer. When data is ready to be read (or the
buffer has been closed), the event will be set. When no data is
ready, the event will be cleared.
@param event: the event to set/clear
@type event: Event
def set_event(self, event):
"""
Set an ... |
Feed new data into this pipe. This method is assumed to be called
from a separate thread, so synchronization is done.
@param data: the data to add
@type data: str
def feed(self, data):
"""
Feed new data into this pipe. This method is assumed to be called
from ... |
Read up to C{length} bytes from this file, starting at position
C{offset}. The offset may be a python long, since SFTP allows it
to be 64 bits.
If the end of the file has been reached, this method may return an
empty string to signify EOF, or it may also return L{SFTP_EOF}.
Th... |
Write C{data} into this file at position C{offset}. Extending the
file past its original end is expected. Unlike python's normal
C{write()} methods, this method cannot do a partial write: it must
write all of C{data} or else return an error.
The default implementation checks for an at... |
Return the canonical form of a path on the server. For example,
if the server's home folder is C{/home/foo}, the path
C{"../betty"} would be canonicalized to C{"/home/betty"}. Note
the obvious security issues: if you're serving files only from a
specific folder, you probably don't want... |
Method automatically called by the run() method of the AgentProxyThread
def connect(self):
"""
Method automatically called by the run() method of the AgentProxyThread
"""
if ('SSH_AUTH_SOCK' in os.environ) and (sys.platform != 'win32'):
conn = socket.socket(socket.AF_UNIX, s... |
Save the host keys back to a file. Only the host keys loaded with
L{load_host_keys} (plus any added directly) will be saved -- not any
host keys loaded with L{load_system_host_keys}.
@param filename: the filename to save to
@type filename: str
@raise IOError: if the file could... |
Execute a command on the SSH server. A new L{Channel} is opened and
the requested command is executed. The command's input and output
streams are returned as python C{file}-like objects representing
stdin, stdout, and stderr.
@param command: the command to execute
@type comman... |
Try, in order:
- The key passed in, if one was passed in.
- Any key we can find through an SSH agent (if allowed).
- Any "id_rsa" or "id_dsa" key discoverable in ~/.ssh/ (if allowed).
- Plain username/password auth, if a password was given.
(The password might b... |
Return the next C{n} bytes of the Message, without decomposing into
an int, string, etc. Just the raw bytes are returned.
@return: a string of the next C{n} bytes of the Message, or a string
of C{n} zero bytes, if there aren't C{n} bytes remaining.
@rtype: string
def get_bytes(sel... |
Add an integer to the stream.
@param n: integer to add
@type n: int
def add_int(self, n):
"""
Add an integer to the stream.
@param n: integer to add
@type n: int
"""
self.packet.write(struct.pack('>I', n))
return self |
Add a string to the stream.
@param s: string to add
@type s: str
def add_string(self, s):
"""
Add a string to the stream.
@param s: string to add
@type s: str
"""
self.add_int(len(s))
self.packet.write(s)
return self |
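The two builders above amount to big-endian packing, with a length prefix for strings; a self-contained sketch of that byte layout:

```python
import struct

# A minimal sketch of the Message stream builders above: ints are packed as
# big-endian uint32, and strings as a length prefix plus the raw bytes.
class Message:
    def __init__(self):
        self.packet = bytearray()

    def add_int(self, n):
        self.packet += struct.pack('>I', n)
        return self

    def add_string(self, s):
        self.add_int(len(s))  # length prefix, then payload
        self.packet += s
        return self

m = Message().add_int(7).add_string(b'ssh-rsa')
print(bytes(m.packet))  # b'\x00\x00\x00\x07\x00\x00\x00\x07ssh-rsa'
```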
Resize the pseudo-terminal. This can be used to change the width and
height of the terminal emulation created in a previous L{get_pty} call.
@param width: new width (in characters) of the terminal screen
@type width: int
@param height: new height (in characters) of the terminal screen
... |
Return the exit status from the process on the server. This is
mostly useful for retrieving the results of an L{exec_command}.
If the command hasn't finished yet, this method will wait until
it does, or until the channel is closed. If no exit status is
provided by the server, -1 is retu... |
Send the exit status of an executed command to the client. (This
really only makes sense in server mode.) Many clients expect to
get some sort of status code back from an executed command after
it completes.
@param status: the exit code of the process
@type status: int... |
Receive data from the channel. The return value is a string
representing the data received. The maximum amount of data to be
received at once is specified by C{nbytes}. If a string of length zero
is returned, the channel stream has closed.
@param nbytes: maximum number of bytes to re... |
Send data to the channel, without allowing partial results. Unlike
L{send}, this method continues to send data from the given string until
either all data has been sent or an error occurs. Nothing is returned.
@param s: data to send.
@type s: str
@raise socket.timeout: if sen... |
Send data to the channel's "stderr" stream, without allowing partial
results. Unlike L{send_stderr}, this method continues to send data
from the given string until all data has been sent or an error occurs.
Nothing is returned.
@param s: data to send to the client as "stderr" o... |