Q Code Pretraining Corpus
This dataset provides a corpus of Q programming language code and documentation, curated for pretraining large language models and code models.
Dataset Overview
- Total Data: Over 1.6 million Q tokens, 5+ million characters
- Documents: 342 training chunks, 39 validation chunks
- Source Types:
- Open-source Q repositories (MIT/Apache 2.0 licenses)
- Official KDB+/Q documentation and tutorials
- Hand-curated code snippets and scripts
- Format: Cleaned, deduplicated, chunked for efficient pretraining
Key Features
- Q-Only: All data is pure Q language (no mixed Python or non-code noise)
- Permissive Licensing: All source code is MIT or Apache 2.0, suitable for both research and commercial use
- Coverage: Includes code from analytics, time-series, database queries, and utilities
- Filtered & Scored: LLM-assisted quality scoring plus manual review for top-tier data fidelity
- Chunked & Ready: Delivered as 4k-token chunks for immediate use with Hugging Face, TRL, or custom pipelines
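The 4k-token chunking above can be sketched as follows. This is an illustrative reconstruction, not the exact pipeline used to build the corpus; in particular, whitespace splitting stands in for whichever tokenizer actually defines the 4k-token boundary:

```python
def chunk_document(text: str, max_tokens: int = 4096) -> list[str]:
    """Split a document into chunks of at most max_tokens tokens.

    Whitespace splitting is a stand-in for a real tokenizer; the card
    does not specify which tokenizer defines the 4k-token boundary.
    """
    tokens = text.split()
    return [
        " ".join(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

# Toy document of 10 "tokens" split into chunks of at most 4
doc = " ".join(f"tok{i}" for i in range(10))
chunks = chunk_document(doc, max_tokens=4)
```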
Dataset Structure
Each record is a single text chunk containing Q code or documentation.
Splits:
- train: Main corpus for pretraining (342 chunks)
- validation: Holdout set for evaluation (39 chunks)
Sample record:
{
"text": str # Raw Q code or documentation chunk
}
Usage
Loading the Dataset
from datasets import load_dataset
# Load the full Q pretraining dataset
dataset = load_dataset("morganstanley/q_pretrained_dataset")
# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]
Example: Previewing Data
sample = dataset["train"][0]
print(sample["text"])
Training Usage
This dataset is designed for language model pretraining using next-token prediction or masked language modeling objectives.
Supports efficient training with Hugging Face Transformers, TRL, or custom frameworks.
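As a concrete sketch of the next-token-prediction setup, tokenized chunks can be packed into fixed-length blocks whose labels equal their inputs (causal-LM models in Hugging Face Transformers shift labels by one position internally when computing the loss). The helper names below are illustrative, not part of this dataset or any library:

```python
def group_into_blocks(tokenized_chunks, block_size):
    """Concatenate lists of token ids and cut them into fixed-size blocks,
    dropping the trailing remainder -- standard causal-LM packing."""
    flat = [t for ids in tokenized_chunks for t in ids]
    usable = (len(flat) // block_size) * block_size
    return [flat[i:i + block_size] for i in range(0, usable, block_size)]

def to_causal_lm_example(block):
    """For next-token prediction the labels are the inputs themselves;
    the model's loss function handles the one-position shift."""
    return {"input_ids": list(block), "labels": list(block)}

# Toy token ids standing in for tokenized Q chunks
tokenized = [[1, 2, 3, 4, 5], [6, 7, 8], [9, 10, 11, 12]]
blocks = group_into_blocks(tokenized, block_size=4)
examples = [to_causal_lm_example(b) for b in blocks]
```

The same packing step is what Hugging Face's causal-LM training examples perform before batching, so the resulting examples drop straight into a standard Trainer or custom loop.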
About the Q Programming Language
Q is a vector and array programming language developed by Kx Systems for high-performance analytics, finance, and time-series applications.
It features:
- Concise, functional, array-oriented syntax
- Powerful built-in operators for large-scale data manipulation
- Industry adoption in trading, banking, and real-time analytics
Source Repositories
Major open-source Q repos included:
- DataIntellectTech/TorQ
- psaris/qtips
- psaris/funq
- KxSystems/ml
- finos/kdb
- LeslieGoldsmith/qprof
- jonathonmcmurray/reQ
- ...and more
All with permissive licenses (MIT or Apache 2.0).
Data Preparation & Filtering
- Automated Scoring: Qwen-2.5-32B was used to score each file (0–10) for quality and relevance; only files scoring ≥ 4 were included.
- Manual Review: Additional cleaning to remove non-Q files or low-value content.
- Deduplication: Duplicate and boilerplate code removed.
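A minimal sketch of exact deduplication via normalized hashing. The normalization shown (whitespace collapsing) is an assumption chosen for illustration, not necessarily the rule used to prepare this corpus:

```python
import hashlib

def normalize(code: str) -> str:
    """Collapse whitespace so trivially reformatted copies hash the same.
    This normalization rule is illustrative only."""
    return " ".join(code.split())

def dedupe(chunks):
    """Keep the first occurrence of each normalized chunk."""
    seen, kept = set(), []
    for chunk in chunks:
        digest = hashlib.sha256(normalize(chunk).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(chunk)
    return kept

# "a:til  10" differs from "a:til 10" only by whitespace, so it is dropped
docs = ["a:til 10", "a:til  10", "b:sum a"]
unique = dedupe(docs)
```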
Citation
If you use this dataset in your research, please cite:
@dataset{q_pretraining_corpus_2024,
title={Q Code Pretraining Corpus},
author={Brendan Rappazzo Hogan},
year={2024},
url={https://huggingface.co/datasets/bhogan/q-pretraining-corpus},
note={Dataset for domain-adaptive pretraining of language models on the Q programming language}
}
Associated Paper: [Link to paper will be added here]