{
"repository_url": "https://github.com/ronelsolomon/cebras.git",
"owner": "ronelsolomon",
"name": "cebras.git",
"extracted_at": "2026-03-02T22:49:08.429260",
"files": {
"requirements.txt": {
"content": "cerebras-cloud-sdk>=0.1.0\npython-dotenv>=1.0.0\n",
"size": 47,
"language": "text"
},
"README.md": {
"content": "# Cerebras Chat Completion Example\n\nThis project demonstrates how to use the Cerebras SDK to generate chat completions using the Llama 4 model.\n\n## Prerequisites\n\n- Python 3.8+\n- Cerebras API key (get it from [Cerebras Console](https://console.cerebras.net/))\n\n## Setup\n\n1. Clone this repository\n2. Install the required dependencies:\n ```bash\n pip install -r requirements.txt\n ```\n3. Set your Cerebras API key as an environment variable:\n ```bash\n # On macOS/Linux\n export CEREBRAS_API_KEY='your-api-key-here'\n \n # On Windows (Command Prompt)\n set CEREBRAS_API_KEY=your-api-key-here\n \n # On Windows (PowerShell)\n $env:CEREBRAS_API_KEY='your-api-key-here'\n ```\n\n## Usage\n\nRun the chat completion example:\n\n```bash\npython cebras.py\n```\n\n## Environment Variables\n\n- `CEREBRAS_API_KEY`: Your Cerebras API key (required)\n\n## Example\n\nThe script will send a chat completion request to the Cerebras API and stream the response to the console.\n",
"size": 962,
"language": "markdown"
},
".env": {
"content": "CEREBRAS_API_KEY=\"csk-y933tjkj4d4fct3jr2xny2yp9k8xkx4kh3rk2d6e59kywpk3\"",
"size": 71,
"language": "unknown"
},
".gitattributes": {
"content": "# Auto detect text files and perform LF normalization\n* text=auto\n",
"size": 66,
"language": "unknown"
},
"cebras.py": {
"content": "import os\nfrom cerebras.cloud.sdk import Cerebras\n\nclient = Cerebras(\n # Reads the API key from the CEREBRAS_API_KEY environment variable\n api_key=os.environ.get(\"CEREBRAS_API_KEY\")\n)\n\nstream = client.chat.completions.create(\n messages=[\n {\n \"role\": \"user\",\n \"content\": \"What is a Rock\"\n }\n ],\n model=\"llama-4-scout-17b-16e-instruct\",\n stream=True,\n max_completion_tokens=2048,\n temperature=0.2,\n top_p=1\n)\n\nfor chunk in stream:\n print(chunk.choices[0].delta.content or \"\", end=\"\")",
"size": 453,
"language": "python"
}
},
"_cache_metadata": {
"url": "https://github.com/ronelsolomon/cebras.git",
"content_type": "github",
"cached_at": "2026-03-02T22:49:08.429936",
"cache_key": "2aa832db180a76e905044673bb878cbd"
}
}