| url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/751 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/751/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/751/comments | https://api.github.com/repos/ollama/ollama/issues/751/events | https://github.com/ollama/ollama/pull/751 | 1,936,267,761 | PR_kwDOJ0Z1Ps5cbxR0 | 751 | Proposal: Add zero-configuration networking support via zeroconf | {
"login": "ericrallen",
"id": 1667415,
"node_id": "MDQ6VXNlcjE2Njc0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1667415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ericrallen",
"html_url": "https://github.com/ericrallen",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 6 | 2023-10-10T21:15:24 | 2024-02-20T01:49:21 | 2024-02-20T01:49:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/751",
"html_url": "https://github.com/ollama/ollama/pull/751",
"diff_url": "https://github.com/ollama/ollama/pull/751.diff",
"patch_url": "https://github.com/ollama/ollama/pull/751.patch",
"merged_at": null
} | This proposal allows the Ollama service to be made discoverable across the user's local network via [zero configuration networking](https://en.wikipedia.org/wiki/Zero-configuration_networking) (Bonjour/Zeroconf/Avahi, aka [Multicast DNS (mDNS)](https://en.wikipedia.org/wiki/Multicast_DNS)) using the [`zeroconf` Go lib... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/751/timeline | null | null | true |
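The proposal body above describes advertising the Ollama server over the local network with a Go zeroconf library. A minimal sketch of what mDNS registration can look like, assuming the `github.com/grandcat/zeroconf` package and a made-up `_ollama._tcp` service type (the PR's actual library, service name, and TXT records may differ):

```go
package main

import (
	"log"
	"os"
	"os/signal"

	"github.com/grandcat/zeroconf"
)

func main() {
	// Advertise an Ollama-like service via multicast DNS. Instance name,
	// service type, and TXT metadata here are illustrative assumptions.
	server, err := zeroconf.Register(
		"ollama",                // instance name
		"_ollama._tcp",          // service type (hypothetical)
		"local.",                // mDNS domain
		11434,                   // Ollama's default port
		[]string{"version=0.1"}, // TXT metadata
		nil,                     // nil = all network interfaces
	)
	if err != nil {
		log.Fatal(err)
	}
	defer server.Shutdown()

	// Keep advertising until interrupted.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt)
	<-sig
}
```

Clients on the same network could then locate the service with the package's matching resolver/browse API instead of hard-coding a host and port.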
https://api.github.com/repos/ollama/ollama/issues/6950 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6950/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6950/comments | https://api.github.com/repos/ollama/ollama/issues/6950/events | https://github.com/ollama/ollama/issues/6950 | 2,547,338,898 | I_kwDOJ0Z1Ps6X1U6S | 6,950 | Support loading concurrent model(s) on CPU when GPU is full | {
"login": "Han-Huaqiao",
"id": 41456966,
"node_id": "MDQ6VXNlcjQxNDU2OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/41456966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Han-Huaqiao",
"html_url": "https://github.com/Han-Huaqiao",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-09-25T08:33:45 | 2024-10-29T08:48:40 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I deployed the qwen2.5:72b-instruct-q6_K model, which occupies 4×3090 GPUs and 75 GB of GPU memory in total. When I use llama3:latest, it will not use RAM and CPU (755 GB / 128 cores); instead it unloads qwen2.5:72b-instruct-q6_K and loads llama3:latest onto the GPU, even though qwen2.5:72b-instruct-q6_K is in use at... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6950/timeline | null | null | false |
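One workaround for the eviction described above is pinning the second model to CPU so it does not compete for VRAM: `num_gpu: 0` is a standard Ollama option meaning "offload zero layers to the GPU". A minimal sketch follows (whether the scheduler then keeps both models loaded concurrently is exactly what this issue asks about; model names are from the issue):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask for llama3 with zero layers offloaded to the GPU, leaving the
	// large qwen2.5 model resident in VRAM.
	body := []byte(`{
		"model": "llama3:latest",
		"prompt": "hello",
		"stream": false,
		"options": {"num_gpu": 0}
	}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```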
https://api.github.com/repos/ollama/ollama/issues/7003 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7003/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7003/comments | https://api.github.com/repos/ollama/ollama/issues/7003/events | https://github.com/ollama/ollama/issues/7003 | 2,553,166,963 | I_kwDOJ0Z1Ps6YLjxz | 7,003 | Ollama freezes when specifying chat roles for some models. | {
"login": "lumost",
"id": 3687195,
"node_id": "MDQ6VXNlcjM2ODcxOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3687195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lumost",
"html_url": "https://github.com/lumost",
"followers_url": "https://api.github.com/users/lumost/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 3 | 2024-09-27T15:07:11 | 2024-12-14T17:09:08 | 2024-12-14T17:09:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When testing llava-llama3 on an agentic task of interpreting an image and generating an action, I specified the role of the 'environment' as 'environment'. This leads to Ollama freezing in the chat dialog. Likewise, when running bakllava, Ollama freezes when multiple `system` role messages are ... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7003/timeline | null | completed | false |
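For reproduction context: the request shape involved is the `messages` array on Ollama's `/api/chat` endpoint. A sketch of a request with a non-standard role like the one described (model and role come from the issue; most chat templates only define user/assistant/system roles, so unknown roles may render unpredictably):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// A chat request that includes a custom "environment" role, as the
	// issue describes.
	body := []byte(`{
		"model": "llava-llama3",
		"messages": [
			{"role": "system", "content": "You are an agent."},
			{"role": "environment", "content": "You see a door."},
			{"role": "user", "content": "What do you do?"}
		],
		"stream": false
	}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/chat",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```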
https://api.github.com/repos/ollama/ollama/issues/3551 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3551/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3551/comments | https://api.github.com/repos/ollama/ollama/issues/3551/events | https://github.com/ollama/ollama/issues/3551 | 2,232,726,873 | I_kwDOJ0Z1Ps6FFLVZ | 3,551 | temperature multiplied by 2 | {
"login": "anasibang",
"id": 58289607,
"node_id": "MDQ6VXNlcjU4Mjg5NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/58289607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anasibang",
"html_url": "https://github.com/anasibang",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 2 | 2024-04-09T06:51:12 | 2024-04-22T22:02:36 | 2024-04-15T19:07:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/ollama/ollama/blob/1341ee1b56b11436a9a8d72f2733ef7ff436ba40/openai/openai.go#L178
Why did you multiply the temperature value by 2? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3551/timeline | null | completed | false |
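For context on what the linked line does: the OpenAI-compatible layer rescales the request temperature before it reaches the model options. A toy reconstruction of that mapping, mirroring the multiplication the issue cites (names are illustrative, not Ollama's actual code):

```go
package main

import "fmt"

// scaleTemperature mirrors the conversion the issue links to: the
// OpenAI-compatible endpoint multiplies the request temperature by 2
// before storing it in the model options. OpenAI documents a 0-2
// temperature range; whether this scaling is appropriate is exactly
// what the issue questions.
func scaleTemperature(openaiTemp float64) float64 {
	return openaiTemp * 2.0
}

func main() {
	for _, t := range []float64{0, 0.7, 1, 2} {
		fmt.Printf("OpenAI temperature %.1f -> backend %.1f\n", t, scaleTemperature(t))
	}
}
```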
https://api.github.com/repos/ollama/ollama/issues/3505 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3505/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3505/comments | https://api.github.com/repos/ollama/ollama/issues/3505/events | https://github.com/ollama/ollama/issues/3505 | 2,228,279,669 | I_kwDOJ0Z1Ps6E0Nl1 | 3,505 | installing binary on linux cluster (A100) and I get nonsense responses | {
"login": "bozo32",
"id": 102033973,
"node_id": "U_kgDOBhTqNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102033973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bozo32",
"html_url": "https://github.com/bozo32",
"followers_url": "https://api.github.com/users/bozo32/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-04-05T15:15:44 | 2024-05-18T04:06:12 | 2024-05-18T04:04:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Installation of the binary using
./ollama-linux-amd64 serve &
./ollama-linux-amd64
when I've used sinteractive to grab a GPU (A100 with 80 GB)
seems to work fine on our cluster.
However, the resulting install does not respect instructions. I asked mixtral chat, mixtral instruct (properly formatt... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3505/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7678 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7678/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7678/comments | https://api.github.com/repos/ollama/ollama/issues/7678/events | https://github.com/ollama/ollama/issues/7678 | 2,660,699,104 | I_kwDOJ0Z1Ps6elwvg | 7,678 | Add Nexusflow/Athene-V2-Chat and Nexusflow/Athene-V2-Agent | {
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/non... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 1 | 2024-11-15T04:41:12 | 2024-11-18T02:52:13 | 2024-11-18T02:51:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | They seem to just be based on Qwen 2.5 Instruct this time | {
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/non... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7678/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7678/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1465 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1465/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1465/comments | https://api.github.com/repos/ollama/ollama/issues/1465/events | https://github.com/ollama/ollama/issues/1465 | 2,035,355,199 | I_kwDOJ0Z1Ps55UQ4_ | 1,465 | CUDA error 2: out of memory (for a 33 billion param model, but I have 39GB of VRAM available across 4 GPUs) | {
"login": "peteygao",
"id": 2184561,
"node_id": "MDQ6VXNlcjIxODQ1NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2184561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peteygao",
"html_url": "https://github.com/peteygao",
"followers_url": "https://api.github.com/users/petey... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 20 | 2023-12-11T10:37:25 | 2024-05-04T21:41:20 | 2024-05-02T21:24:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The model I'm trying to run is `deepseek-coder:33b` and `journalctl -u ollama` outputs:
```
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:292: 39320 MB VRAM available, loading up to 101 GPU layers
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:421: starting llama runner
Dec 11 18:... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1465/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5293 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5293/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5293/comments | https://api.github.com/repos/ollama/ollama/issues/5293/events | https://github.com/ollama/ollama/issues/5293 | 2,374,406,550 | I_kwDOJ0Z1Ps6NhpGW | 5,293 | openchat 8b | {
"login": "zh19990906",
"id": 59323683,
"node_id": "MDQ6VXNlcjU5MzIzNjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/59323683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zh19990906",
"html_url": "https://github.com/zh19990906",
"followers_url": "https://api.github.com/use... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2024-06-26T06:17:09 | 2024-06-26T06:17:09 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | openchat/openchat-3.6-8b-20240522
https://huggingface.co/openchat/openchat_3.5 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5293/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8094 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8094/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8094/comments | https://api.github.com/repos/ollama/ollama/issues/8094/events | https://github.com/ollama/ollama/issues/8094 | 2,739,752,259 | I_kwDOJ0Z1Ps6jTU1D | 8,094 | No normalization option was provided when calling the embedding model | {
"login": "szzhh",
"id": 78521539,
"node_id": "MDQ6VXNlcjc4NTIxNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/78521539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szzhh",
"html_url": "https://github.com/szzhh",
"followers_url": "https://api.github.com/users/szzhh/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-12-14T10:11:26 | 2024-12-14T16:39:45 | 2024-12-14T16:39:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
As the title says, I want to use ollama to call mxbai-embed-large:latest and output a normalized vector, but ollama does not seem to support normalized=true
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.1 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8094/timeline | null | completed | false |
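Until a server-side normalization flag exists, the returned vector can be normalized client-side. A minimal sketch of plain L2 normalization, which is what a `normalized=true` option would produce (the example vector stands in for a real `/api/embed` response):

```go
package main

import (
	"fmt"
	"math"
)

// l2Normalize rescales v to unit length.
func l2Normalize(v []float64) []float64 {
	var sum float64
	for _, x := range v {
		sum += x * x
	}
	norm := math.Sqrt(sum)
	if norm == 0 {
		return v // avoid dividing by zero for the all-zero vector
	}
	out := make([]float64, len(v))
	for i, x := range v {
		out[i] = x / norm
	}
	return out
}

func main() {
	emb := []float64{3, 4}        // stand-in for an embedding from the API
	fmt.Println(l2Normalize(emb)) // [0.6 0.8]
}
```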
https://api.github.com/repos/ollama/ollama/issues/1271 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1271/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1271/comments | https://api.github.com/repos/ollama/ollama/issues/1271/events | https://github.com/ollama/ollama/issues/1271 | 2,010,412,387 | I_kwDOJ0Z1Ps531HVj | 1,271 | Terminal output issues on Windows | {
"login": "clebio",
"id": 811175,
"node_id": "MDQ6VXNlcjgxMTE3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/811175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clebio",
"html_url": "https://github.com/clebio",
"followers_url": "https://api.github.com/users/clebio/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2023-11-25T01:12:21 | 2024-09-14T23:10:39 | 2024-03-12T16:30:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I saw that #1262 was merged, so I pulled main and regenerated and built the binary. It runs great, and definitely uses the GPU, now:
```
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9
```
However, the terminal interf... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1271/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6489 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6489/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6489/comments | https://api.github.com/repos/ollama/ollama/issues/6489/events | https://github.com/ollama/ollama/issues/6489 | 2,484,485,771 | I_kwDOJ0Z1Ps6UFj6L | 6,489 | Error 403 occurs when I call ollama's api | {
"login": "brownplayer",
"id": 118909356,
"node_id": "U_kgDOBxZprA",
"avatar_url": "https://avatars.githubusercontent.com/u/118909356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brownplayer",
"html_url": "https://github.com/brownplayer",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 10 | 2024-08-24T10:31:27 | 2024-08-25T01:52:50 | 2024-08-25T01:52:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Prerequisite: use the C++ interface of ipex-llm as Ollama's acceleration backend, then start the Ollama server (127.0.0.1:11434). When the Edge browser plug-in is used to access the Ollama API, error 403 occurs.

I typed `ollama serve`, then with port forwarding to
xxx.xxx.xxx (my server):5050,
there is a message saying "Ollama is running",
so, on Postman, I tried to send a prompt to receive an answer from Ollama.
<img width="746" alt="스크린샷 2023-12-18 오후 9 18 12" src="https://github.com/jmorganca/ollama/a... | {
"login": "kotran88",
"id": 20656932,
"node_id": "MDQ6VXNlcjIwNjU2OTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/20656932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kotran88",
"html_url": "https://github.com/kotran88",
"followers_url": "https://api.github.com/users/kot... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6489/timeline | null | completed | false |
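A likely cause of the 403 above is Ollama's origin allow-list: requests that carry a browser `Origin` header are rejected unless that origin is permitted via the `OLLAMA_ORIGINS` environment variable. A sketch that reproduces the check from Go (the extension origin and model name shown are made up):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Browser extensions attach an Origin header; Ollama rejects
	// unrecognized origins with 403 unless OLLAMA_ORIGINS allows them.
	req, err := http.NewRequest("POST",
		"http://127.0.0.1:11434/api/generate",
		strings.NewReader(`{"model":"llama3","prompt":"hi"}`))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Origin", "chrome-extension://abcdefgh") // hypothetical

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect 403 Forbidden unless the origin is allowed
}
```

Starting the server with `OLLAMA_ORIGINS` set to the extension's origin (or `*`) is the usual fix.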
https://api.github.com/repos/ollama/ollama/issues/5685 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5685/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5685/comments | https://api.github.com/repos/ollama/ollama/issues/5685/events | https://github.com/ollama/ollama/pull/5685 | 2,407,253,786 | PR_kwDOJ0Z1Ps51TyoM | 5,685 | Disable mmap by default for Windows ROCm | {
"login": "zsmooter",
"id": 15349942,
"node_id": "MDQ6VXNlcjE1MzQ5OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/15349942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsmooter",
"html_url": "https://github.com/zsmooter",
"followers_url": "https://api.github.com/users/zsm... | [] | closed | false | null | [] | null | 1 | 2024-07-14T03:54:43 | 2024-11-23T00:55:12 | 2024-11-23T00:55:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5685",
"html_url": "https://github.com/ollama/ollama/pull/5685",
"diff_url": "https://github.com/ollama/ollama/pull/5685.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5685.patch",
"merged_at": null
} | The same as with CUDA, disabling mmap when using ROCm on Windows seems to speed up model load times significantly. I get a >2x speedup in model load times on my 7900 XTX when disabling mmap. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5685/timeline | null | null | true |
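For users who want this behavior regardless of the default, `use_mmap` is exposed as a per-request model option. A minimal sketch (the model name is a placeholder; this PR only changes the default for Windows ROCm, not the option itself):

```go
package main

import (
	"bytes"
	"net/http"
)

func main() {
	// Explicitly disable memory-mapped model loading for one request.
	body := []byte(`{"model": "llama3", "prompt": "hi",
		"options": {"use_mmap": false}}`)
	if _, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body)); err != nil {
		panic(err)
	}
}
```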
https://api.github.com/repos/ollama/ollama/issues/6108 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6108/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6108/comments | https://api.github.com/repos/ollama/ollama/issues/6108/events | https://github.com/ollama/ollama/pull/6108 | 2,441,050,958 | PR_kwDOJ0Z1Ps53C8lu | 6,108 | server: fix json marshalling of downloadBlobPart | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-07-31T22:30:46 | 2024-07-31T23:01:26 | 2024-07-31T23:01:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6108",
"html_url": "https://github.com/ollama/ollama/pull/6108",
"diff_url": "https://github.com/ollama/ollama/pull/6108.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6108.patch",
"merged_at": "2024-07-31T23:01:25"
} | The JSON marshalling of downloadBlobPart was incorrect and racy. This fixes it by implementing custom JSON marshalling for downloadBlobPart, which correctly handles serialization of shared memory. | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6108/timeline | null | null | true |
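A generic illustration of the pattern this PR describes (not the actual `downloadBlobPart` code): a custom `MarshalJSON` takes a lock and snapshots the fields that another goroutine mutates, so encoding never reads shared memory mid-update.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// part stands in for a download-progress record whose fields a
// downloader goroutine updates concurrently.
type part struct {
	mu        sync.Mutex
	Completed int64
	Total     int64
}

// MarshalJSON copies the fields under the lock, then encodes the
// private snapshot, eliminating the race with concurrent writers.
func (p *part) MarshalJSON() ([]byte, error) {
	p.mu.Lock()
	snapshot := struct {
		Completed int64 `json:"completed"`
		Total     int64 `json:"total"`
	}{p.Completed, p.Total}
	p.mu.Unlock()
	return json.Marshal(snapshot)
}

func main() {
	p := &part{Completed: 512, Total: 4096}
	b, _ := json.Marshal(p)
	fmt.Println(string(b)) // {"completed":512,"total":4096}
}
```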
https://api.github.com/repos/ollama/ollama/issues/5353 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5353/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5353/comments | https://api.github.com/repos/ollama/ollama/issues/5353/events | https://github.com/ollama/ollama/pull/5353 | 2,379,455,173 | PR_kwDOJ0Z1Ps5z10qG | 5,353 | Draft: Support Moore Threads GPU | {
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-06-28T02:51:26 | 2024-07-09T01:42:15 | 2024-07-09T01:28:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5353",
"html_url": "https://github.com/ollama/ollama/pull/5353",
"diff_url": "https://github.com/ollama/ollama/pull/5353.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5353.patch",
"merged_at": null
} | Moore Threads, a cutting-edge GPU startup, introduces MUSA (Moore Threads Unified System Architecture) as its foundational technology. This pull request marks the initial integration of MT GPU support into Ollama, leveraging MUSA's capabilities to enhance LLM inference performance.
I am also working on integrating M... | {
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5353/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3181 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3181/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3181/comments | https://api.github.com/repos/ollama/ollama/issues/3181/events | https://github.com/ollama/ollama/issues/3181 | 2,190,114,182 | I_kwDOJ0Z1Ps6Cin2G | 3,181 | Suppressing output of all the metadata. | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-03-16T16:03:51 | 2024-03-16T16:40:35 | 2024-03-16T16:40:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I want to use Ollama to serve a local LLM with an OpenAI-compatible API so that Pythagora/gpt-pilot can interact with it.
The back end constantly prints out the log spam below:
```bash
{"function":"launch_slot_with_data","id_slot":0,"id_task":2842,"level":"INFO","line":1002,"msg":"slot is processing task","tid... | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3181/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7013 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7013/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7013/comments | https://api.github.com/repos/ollama/ollama/issues/7013/events | https://github.com/ollama/ollama/issues/7013 | 2,553,931,830 | I_kwDOJ0Z1Ps6YOeg2 | 7,013 | Option to Override a Model's Memory Requirements | {
"login": "dabockster",
"id": 2431938,
"node_id": "MDQ6VXNlcjI0MzE5Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2431938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dabockster",
"html_url": "https://github.com/dabockster",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXU... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-09-28T01:07:23 | 2025-01-09T14:15:35 | 2024-09-28T22:49:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was trying to load the 70b Llama3 model and Ollama says I need 33.6 GB of 30.5 GB RAM. I believe this is a safety thing Meta put into the model, so I want to have the ability to override this and attempt to run it on lower amounts of memory. I know this will likely dip into swap/page file space, possibly even causing... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7013/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/91 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/91/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/91/comments | https://api.github.com/repos/ollama/ollama/issues/91/events | https://github.com/ollama/ollama/pull/91 | 1,808,594,224 | PR_kwDOJ0Z1Ps5VtvIK | 91 | fix stream errors | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-17T20:41:20 | 2023-07-20T19:25:47 | 2023-07-20T19:22:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/91",
"html_url": "https://github.com/ollama/ollama/pull/91",
"diff_url": "https://github.com/ollama/ollama/pull/91.diff",
"patch_url": "https://github.com/ollama/ollama/pull/91.patch",
"merged_at": "2023-07-20T19:22:00"
} | Once the stream is created, it's too late to update response headers (i.e., the status code). Any and all errors must be returned by the stream. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/91/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/91/timeline | null | null | true |
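A sketch of the constraint this PR describes, using a generic `net/http` handler rather than Ollama's actual server code: once the first chunk has been flushed, the 200 status line is already on the wire, so later failures have to be reported as in-stream JSON objects.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func streamHandler(w http.ResponseWriter, r *http.Request) {
	enc := json.NewEncoder(w)
	flusher, _ := w.(http.Flusher)

	for i := 0; i < 3; i++ {
		if err := doWork(i); err != nil {
			// Too late for http.Error(w, ..., 500): the client already
			// received 200 OK, so report the error in-band.
			enc.Encode(map[string]string{"error": err.Error()})
			return
		}
		enc.Encode(map[string]any{"chunk": i})
		if flusher != nil {
			flusher.Flush() // after this, the status code is fixed
		}
	}
}

func doWork(i int) error {
	if i == 2 {
		return fmt.Errorf("backend failed at step %d", i)
	}
	return nil
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```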
https://api.github.com/repos/ollama/ollama/issues/7137 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7137/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7137/comments | https://api.github.com/repos/ollama/ollama/issues/7137/events | https://github.com/ollama/ollama/pull/7137 | 2,573,657,476 | PR_kwDOJ0Z1Ps59-RSp | 7,137 | llama: add compiler tags for cpu features | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 2 | 2024-10-08T16:14:44 | 2024-10-17T20:43:24 | 2024-10-17T20:43:21 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7137",
"html_url": "https://github.com/ollama/ollama/pull/7137",
"diff_url": "https://github.com/ollama/ollama/pull/7137.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7137.patch",
"merged_at": "2024-10-17T20:43:21"
} | Replaces #7009, now on main.
Supports local builds with customized CPU flags for both the CPU runner and the GPU runners.
Some users want no vector flags in the GPU runners; others want ~all the vector extensions enabled. Each runner we add to the official build adds significant overhead (size and build time), so this e... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7137/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2343 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2343/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2343/comments | https://api.github.com/repos/ollama/ollama/issues/2343/events | https://github.com/ollama/ollama/issues/2343 | 2,116,879,740 | I_kwDOJ0Z1Ps5-LQV8 | 2,343 | Feature Request - Support for ollama Keep alive | {
"login": "twalderman",
"id": 78627063,
"node_id": "MDQ6VXNlcjc4NjI3MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/78627063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twalderman",
"html_url": "https://github.com/twalderman",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2024-02-04T04:39:10 | 2024-02-20T03:57:40 | 2024-02-20T03:57:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There is a new API parameter for keeping the model loaded. It would be great to have it as a passable parameter in the Modelfile. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2343/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2343/timeline | null | completed | false |
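The parameter referred to above is `keep_alive` on the generate and chat endpoints; it controls how long the model stays loaded after a request (a duration string, or -1 for indefinitely). A minimal per-request sketch (an empty prompt just loads the model; the model name is a placeholder), while the feature request here is to also allow setting it in the Modelfile:

```go
package main

import (
	"bytes"
	"net/http"
)

func main() {
	// Load llama3 and keep it resident for 24 hours.
	body := []byte(`{"model": "llama3", "prompt": "", "keep_alive": "24h"}`)
	if _, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body)); err != nil {
		panic(err)
	}
}
```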
https://api.github.com/repos/ollama/ollama/issues/5777 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5777/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5777/comments | https://api.github.com/repos/ollama/ollama/issues/5777/events | https://github.com/ollama/ollama/issues/5777 | 2,416,984,810 | I_kwDOJ0Z1Ps6QEELq | 5,777 | Mistral Nemo Please! | {
"login": "stevengans",
"id": 10685309,
"node_id": "MDQ6VXNlcjEwNjg1MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/10685309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevengans",
"html_url": "https://github.com/stevengans",
"followers_url": "https://api.github.com/use... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 28 | 2024-07-18T17:30:36 | 2024-07-23T12:27:30 | 2024-07-22T20:34:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://mistral.ai/news/mistral-nemo/ | {
"login": "stevengans",
"id": 10685309,
"node_id": "MDQ6VXNlcjEwNjg1MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/10685309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevengans",
"html_url": "https://github.com/stevengans",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5777/reactions",
"total_count": 48,
"+1": 48,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5777/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1344 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1344/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1344/comments | https://api.github.com/repos/ollama/ollama/issues/1344/events | https://github.com/ollama/ollama/issues/1344 | 2,020,533,276 | I_kwDOJ0Z1Ps54buQc | 1,344 | Beam search (best of) for completion API | {
"login": "walking-octopus",
"id": 46994949,
"node_id": "MDQ6VXNlcjQ2OTk0OTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/46994949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/walking-octopus",
"html_url": "https://github.com/walking-octopus",
"followers_url": "https://api... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-12-01T10:03:34 | 2024-11-21T08:24:39 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Beam search is a sampling mechanism by which we maximize the probability not just of the next token, but of the entire completion.
While it can be ignored for simpler uses, any form of reasoning, especially with a tiny model, requires beam search to backtrack from incorrect steps.
llama.cpp already supports beam search,... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1344/reactions",
"total_count": 14,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1344/timeline | null | null | false |
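To make the mechanism concrete, here is a self-contained beam search sketch over a toy next-token distribution; a real integration would replace `logProbs` with calls into the model:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cand is a partial completion plus its cumulative log-probability.
type cand struct {
	tokens []string
	score  float64
}

// logProbs is a stand-in for a language model's next-token distribution.
func logProbs(prefix []string) map[string]float64 {
	return map[string]float64{
		"a": math.Log(0.5), "b": math.Log(0.3), "c": math.Log(0.2),
	}
}

// beamSearch keeps the `width` highest-scoring partial sequences at each
// step, maximizing the probability of the whole completion rather than
// greedily committing to each next token; keeping several hypotheses is
// what lets it back away from locally attractive but globally poor steps.
func beamSearch(width, steps int) []cand {
	beams := []cand{{}}
	for step := 0; step < steps; step++ {
		var next []cand
		for _, b := range beams {
			for tok, lp := range logProbs(b.tokens) {
				next = append(next, cand{
					tokens: append(append([]string{}, b.tokens...), tok),
					score:  b.score + lp,
				})
			}
		}
		sort.Slice(next, func(i, j int) bool { return next[i].score > next[j].score })
		if len(next) > width {
			next = next[:width]
		}
		beams = next
	}
	return beams
}

func main() {
	for _, b := range beamSearch(2, 3) {
		fmt.Printf("%v  logp=%.3f\n", b.tokens, b.score)
	}
}
```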
https://api.github.com/repos/ollama/ollama/issues/7574 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7574/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7574/comments | https://api.github.com/repos/ollama/ollama/issues/7574/events | https://github.com/ollama/ollama/issues/7574 | 2,643,589,468 | I_kwDOJ0Z1Ps6dkflc | 7,574 | LLaMa 3.2 90B on multi GPU crashes | {
"login": "BBOBDI",
"id": 145003778,
"node_id": "U_kgDOCKSVAg",
"avatar_url": "https://avatars.githubusercontent.com/u/145003778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BBOBDI",
"html_url": "https://github.com/BBOBDI",
"followers_url": "https://api.github.com/users/BBOBDI/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-11-08T10:28:10 | 2024-11-08T22:08:52 | 2024-11-08T22:08:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello!
My problem may be similar to issue #7568. I think there is a problem with the distribution of the LLaMa 3.2 90B model across multiple GPUs. When it runs on a single GPU (quantized), it works. But when it runs on multiple GPUs, it crashes.
On my server running Linux Debian Bookworm an... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7574/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1796 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1796/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1796/comments | https://api.github.com/repos/ollama/ollama/issues/1796/events | https://github.com/ollama/ollama/issues/1796 | 2,066,606,956 | I_kwDOJ0Z1Ps57Lets | 1,796 | Readme refers to 404 docker documentation | {
"login": "tommedema",
"id": 331833,
"node_id": "MDQ6VXNlcjMzMTgzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/331833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommedema",
"html_url": "https://github.com/tommedema",
"followers_url": "https://api.github.com/users/tomm... | [] | closed | false | null | [] | null | 2 | 2024-01-05T01:58:21 | 2024-01-05T03:23:51 | 2024-01-05T03:23:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The main [readme](https://github.com/jmorganca/ollama/blob/main/docs/README.md) refers to https://github.com/jmorganca/ollama/blob/main/docs/docker.md, which gives a 404. Is Docker still supported? | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1796/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4411 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4411/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4411/comments | https://api.github.com/repos/ollama/ollama/issues/4411/events | https://github.com/ollama/ollama/pull/4411 | 2,293,878,091 | PR_kwDOJ0Z1Ps5vUQLp | 4,411 | removed inconsistent punctuation | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | [] | closed | false | null | [] | null | 0 | 2024-05-13T21:28:59 | 2024-05-13T22:30:46 | 2024-05-13T22:30:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4411",
"html_url": "https://github.com/ollama/ollama/pull/4411",
"diff_url": "https://github.com/ollama/ollama/pull/4411.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4411.patch",
"merged_at": "2024-05-13T22:30:46"
} | Removed the period in `ollama serve -h`
Resolves https://github.com/ollama/ollama/issues/4410
| {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4411/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5259 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5259/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5259/comments | https://api.github.com/repos/ollama/ollama/issues/5259/events | https://github.com/ollama/ollama/issues/5259 | 2,371,106,359 | I_kwDOJ0Z1Ps6NVDY3 | 5,259 | Support Multiple Types for OpenAI Completions Endpoint | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-06-24T21:14:31 | 2024-07-22T10:21:52 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Allow v1/completions to handle []string, []int and [][]int, in addition to just a string | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5259/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5259/timeline | null | null | false |
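A sketch of the request decoding such an endpoint would need: a custom `UnmarshalJSON` that tries each accepted shape in turn (the type and field names are illustrative, not Ollama's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Prompt accepts the four payload shapes the issue lists: a single
// string, a list of strings, one token sequence, or several.
type Prompt struct {
	Strings []string
	Tokens  [][]int
}

func (p *Prompt) UnmarshalJSON(b []byte) error {
	var s string
	if err := json.Unmarshal(b, &s); err == nil {
		p.Strings = []string{s}
		return nil
	}
	var ss []string
	if err := json.Unmarshal(b, &ss); err == nil {
		p.Strings = ss
		return nil
	}
	var ts []int
	if err := json.Unmarshal(b, &ts); err == nil {
		p.Tokens = [][]int{ts}
		return nil
	}
	var tss [][]int
	if err := json.Unmarshal(b, &tss); err == nil {
		p.Tokens = tss
		return nil
	}
	return fmt.Errorf("prompt must be a string, []string, []int, or [][]int")
}

func main() {
	for _, raw := range []string{`"hi"`, `["a","b"]`, `[1,2,3]`, `[[1],[2,3]]`} {
		var p Prompt
		if err := json.Unmarshal([]byte(raw), &p); err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %+v\n", raw, p)
	}
}
```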
https://api.github.com/repos/ollama/ollama/issues/3741 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3741/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3741/comments | https://api.github.com/repos/ollama/ollama/issues/3741/events | https://github.com/ollama/ollama/issues/3741 | 2,251,885,195 | I_kwDOJ0Z1Ps6GOQqL | 3,741 | Please accept slow network connections when loading models | {
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | closed | false | null | [] | null | 1 | 2024-04-19T01:24:24 | 2024-08-11T22:52:36 | 2024-08-11T22:52:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Downloading of models on slow networks stops too frequently
(base) igor@macigor ~ % ollama run llava:7b
pulling manifest
pulling 170370233dd5... 23% ▕███ ▏ 959 MB/4.1 GB 882 KB/s 59m29s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflare... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3741/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7269 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7269/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7269/comments | https://api.github.com/repos/ollama/ollama/issues/7269/events | https://github.com/ollama/ollama/issues/7269 | 2,598,930,671 | I_kwDOJ0Z1Ps6a6Ijv | 7,269 | Ability to customize the Ollama installation directory | {
"login": "wxpid1",
"id": 45633931,
"node_id": "MDQ6VXNlcjQ1NjMzOTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/45633931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wxpid1",
"html_url": "https://github.com/wxpid1",
"followers_url": "https://api.github.com/users/wxpid1/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-10-19T09:11:13 | 2024-10-19T09:52:54 | 2024-10-19T09:52:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Sometimes my user directory does not have much space, so I want to install software to another drive. That would solve the space issue and let me manage my installed software myself.
I tried the OLLAMA variable setting, but it didn't work and Ollama was still installed on the C drive.
It's even better if you... | {
"login": "wxpid1",
"id": 45633931,
"node_id": "MDQ6VXNlcjQ1NjMzOTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/45633931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wxpid1",
"html_url": "https://github.com/wxpid1",
"followers_url": "https://api.github.com/users/wxpid1/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7269/timeline | null | completed | false |
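The record above (#7269) conflates two things: where the installer puts the program and where models are stored. The storage half has a documented knob, the `OLLAMA_MODELS` environment variable; a Windows sketch, with the target path purely an example:

```
rem point model storage at another drive (path is an example)
setx OLLAMA_MODELS "D:\ollama\models"
rem quit and restart Ollama so the new location takes effect
```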
https://api.github.com/repos/ollama/ollama/issues/5520 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5520/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5520/comments | https://api.github.com/repos/ollama/ollama/issues/5520/events | https://github.com/ollama/ollama/pull/5520 | 2,393,773,992 | PR_kwDOJ0Z1Ps50mVFp | 5,520 | llm: add `-DBUILD_SHARED_LIBS=off` to common cpu cmake flags | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-06T22:58:08 | 2024-07-06T22:58:18 | 2024-07-06T22:58:17 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5520",
"html_url": "https://github.com/ollama/ollama/pull/5520",
"diff_url": "https://github.com/ollama/ollama/pull/5520.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5520.patch",
"merged_at": "2024-07-06T22:58:17"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5520/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7769 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7769/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7769/comments | https://api.github.com/repos/ollama/ollama/issues/7769/events | https://github.com/ollama/ollama/issues/7769 | 2,677,200,814 | I_kwDOJ0Z1Ps6fkteu | 7,769 | Request: Nexa AI Omnivision | {
"login": "mak448a",
"id": 94062293,
"node_id": "U_kgDOBZtG1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/94062293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mak448a",
"html_url": "https://github.com/mak448a",
"followers_url": "https://api.github.com/users/mak448a/follow... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2024-11-20T21:13:41 | 2024-11-20T21:14:46 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Here is the link to Nexa AI's model. (I haven't checked over it to make sure it's super reputable though)
https://huggingface.co/NexaAIDev/omnivision-968M | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7769/reactions",
"total_count": 15,
"+1": 15,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7769/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3999 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3999/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3999/comments | https://api.github.com/repos/ollama/ollama/issues/3999/events | https://github.com/ollama/ollama/issues/3999 | 2,267,446,167 | I_kwDOJ0Z1Ps6HJnuX | 3,999 | could not connect to ollama app | {
"login": "ricardodddduck",
"id": 163819103,
"node_id": "U_kgDOCcOuXw",
"avatar_url": "https://avatars.githubusercontent.com/u/163819103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ricardodddduck",
"html_url": "https://github.com/ricardodddduck",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-04-28T09:10:01 | 2024-05-01T16:40:00 | 2024-05-01T16:40:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
could not connect to ollama app, is it running?
It always happens, even after reinstalling ollama.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
_No response_ | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3999/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/462 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/462/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/462/comments | https://api.github.com/repos/ollama/ollama/issues/462/events | https://github.com/ollama/ollama/pull/462 | 1,879,186,581 | PR_kwDOJ0Z1Ps5ZbW3F | 462 | remove marshalPrompt which is no longer needed | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-03T18:13:11 | 2023-09-05T18:48:43 | 2023-09-05T18:48:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/462",
"html_url": "https://github.com/ollama/ollama/pull/462",
"diff_url": "https://github.com/ollama/ollama/pull/462.diff",
"patch_url": "https://github.com/ollama/ollama/pull/462.patch",
"merged_at": "2023-09-05T18:48:42"
} | llama.cpp server handles truncating input | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/462/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1039 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1039/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1039/comments | https://api.github.com/repos/ollama/ollama/issues/1039/events | https://github.com/ollama/ollama/issues/1039 | 1,982,834,018 | I_kwDOJ0Z1Ps52L6Vi | 1,039 | Fail to load Custom Models | {
"login": "tjlcast",
"id": 16621867,
"node_id": "MDQ6VXNlcjE2NjIxODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/16621867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tjlcast",
"html_url": "https://github.com/tjlcast",
"followers_url": "https://api.github.com/users/tjlcas... | [] | closed | false | null | [] | null | 4 | 2023-11-08T06:23:22 | 2023-12-04T21:42:02 | 2023-12-04T21:42:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi
I want to load a custom gguf model [TheBloke/deepseek-coder-6.7B-instruct-GGUF](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF)
ModelFile is:
```
FROM ./deepseek-coder-6.7b-instruct.Q4_K_M.gguf
```
But when I run the build, it reports an error:
```
% ollama create amodel -f ./Modelfi... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1039/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3012 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3012/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3012/comments | https://api.github.com/repos/ollama/ollama/issues/3012/events | https://github.com/ollama/ollama/issues/3012 | 2,176,710,344 | I_kwDOJ0Z1Ps6BvfbI | 3,012 | Scoop repo, NIX repo & Debian repo | {
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2024-03-08T20:17:50 | 2024-03-11T22:18:19 | 2024-03-11T22:18:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please try and get Ollama into the Windows Scoop package repo and try to get Ollama into the Linux Nix repo and Debian repo. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3012/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/966 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/966/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/966/comments | https://api.github.com/repos/ollama/ollama/issues/966/events | https://github.com/ollama/ollama/pull/966 | 1,973,293,095 | PR_kwDOJ0Z1Ps5eYgin | 966 | fix log | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-11-02T00:18:49 | 2023-11-02T00:49:11 | 2023-11-02T00:49:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/966",
"html_url": "https://github.com/ollama/ollama/pull/966",
"diff_url": "https://github.com/ollama/ollama/pull/966.diff",
"patch_url": "https://github.com/ollama/ollama/pull/966.patch",
"merged_at": "2023-11-02T00:49:10"
} | if there's a remainder, the log line will show the remainder instead of the actual size | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/966/timeline | null | null | true |
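PR #966 above fixes a log line that printed a size's remainder where the total belonged. A minimal reconstruction of that failure mode, assuming a human-readable formatter that splits a byte count into whole units plus a remainder (not the actual Ollama code):

```go
package main

import "fmt"

func main() {
	size := int64(4_100_000_000) // ~4.1 GB layer
	const gib = int64(1) << 30

	whole := size / gib // 3 GiB
	rem := size % gib   // 878,774,528 bytes left over

	// buggy: the remainder is logged as if it were the size
	fmt.Printf("downloading: %d bytes\n", rem)

	// fixed: log the actual size; the remainder is only for pretty-printing
	fmt.Printf("downloading: %d bytes (%d GiB + %d B)\n", size, whole, rem)
}
```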
https://api.github.com/repos/ollama/ollama/issues/4634 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4634/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4634/comments | https://api.github.com/repos/ollama/ollama/issues/4634/events | https://github.com/ollama/ollama/issues/4634 | 2,316,942,320 | I_kwDOJ0Z1Ps6KGbvw | 4,634 | Getting Weird Response | {
"login": "Yash-1511",
"id": 82636823,
"node_id": "MDQ6VXNlcjgyNjM2ODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/82636823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yash-1511",
"html_url": "https://github.com/Yash-1511",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-05-25T11:16:29 | 2024-07-25T23:33:01 | 2024-07-25T23:33:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am using ollama on an Apple Mac M2 Ultra. I have been getting the same problem for the past two days. I deleted all my models, uninstalled ollama and reinstalled it; sometimes it works, but most of the time I get some weird response.
 than is available | {
"login": "philippstoboy",
"id": 76473104,
"node_id": "MDQ6VXNlcjc2NDczMTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/76473104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philippstoboy",
"html_url": "https://github.com/philippstoboy",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | 8 | 2025-01-29T15:33:31 | 2025-01-29T21:31:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey Ollama community,
I’m reaching out for some advice on running the DeepSeek-R1 671B model with Q4 quantization on my current setup, which has 40GB of RAM. I understand that this model employs a Mixture of Experts (MoE) architecture, meaning that during inference, only a subset of the model’s parameters (approximate... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8667/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6911 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6911/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6911/comments | https://api.github.com/repos/ollama/ollama/issues/6911/events | https://github.com/ollama/ollama/issues/6911 | 2,541,219,675 | I_kwDOJ0Z1Ps6Xd-9b | 6,911 | Mixture of Agents for Ollama | {
"login": "secondtruth",
"id": 416441,
"node_id": "MDQ6VXNlcjQxNjQ0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/416441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/secondtruth",
"html_url": "https://github.com/secondtruth",
"followers_url": "https://api.github.com/user... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-09-22T19:16:14 | 2025-01-06T07:35:29 | 2025-01-06T07:35:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The [Mixture of Agents (MoA)](https://arxiv.org/abs/2406.04692) is an innovative approach to leveraging the collective strengths of multiple language models to enhance overall performance and capabilities of one main model (aggregator). By combining outputs from various models, each potentially excelling in different a... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6911/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2888 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2888/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2888/comments | https://api.github.com/repos/ollama/ollama/issues/2888/events | https://github.com/ollama/ollama/issues/2888 | 2,165,067,697 | I_kwDOJ0Z1Ps6BDE-x | 2,888 | Fail to load dynamic library - unicode character path | {
"login": "08183080",
"id": 51738561,
"node_id": "MDQ6VXNlcjUxNzM4NTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/51738561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/08183080",
"html_url": "https://github.com/08183080",
"followers_url": "https://api.github.com/users/081... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-03-03T01:42:38 | 2024-04-16T21:00:14 | 2024-04-16T21:00:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Error: Unable to load dynamic library: Unable to load dynamic server library: 找不到指定的模块。 ("The specified module could not be found.") | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2888/timeline | null | completed | false |
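The `�` run in #2888 above is a GBK-encoded Windows error ("找不到指定的模块。", "The specified module could not be found.") read back as UTF-8. A sketch of the round trip with `golang.org/x/text` (the package must be fetched; this reproduces the mojibake rather than fixing Ollama):

```go
package main

import (
	"fmt"

	"golang.org/x/text/encoding/simplifiedchinese"
)

func main() {
	msg := "找不到指定的模块。" // "The specified module could not be found."

	// Encode to GBK, as a Chinese-locale Windows API would return it.
	gbk, err := simplifiedchinese.GBK.NewEncoder().Bytes([]byte(msg))
	if err != nil {
		panic(err)
	}

	// Interpreting the GBK bytes as UTF-8 yields the garbage from the issue.
	fmt.Println(string(gbk))

	// Decoding with the correct charset recovers the message.
	decoded, err := simplifiedchinese.GBK.NewDecoder().Bytes(gbk)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(decoded))
}
```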
https://api.github.com/repos/ollama/ollama/issues/3240 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3240/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3240/comments | https://api.github.com/repos/ollama/ollama/issues/3240/events | https://github.com/ollama/ollama/pull/3240 | 2,194,369,059 | PR_kwDOJ0Z1Ps5qDq2r | 3,240 | do not prompt to move the CLI on install flow if already installed | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | open | false | null | [] | null | 2 | 2024-03-19T08:48:10 | 2024-09-16T10:23:50 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3240",
"html_url": "https://github.com/ollama/ollama/pull/3240",
"diff_url": "https://github.com/ollama/ollama/pull/3240.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3240.patch",
"merged_at": null
} | If Ollama is installed via brew, or the user wishes to manage their path manually, they will be prompted to install the CLI when opening the Ollama Mac app. This change checks whether Ollama is already on the path; if it is found, the user is not prompted to link the executable to `/usr/local/bin/ollama`.
... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3240/timeline | null | null | true |
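A minimal sketch of the check PR #3240 above describes, using the standard library's PATH lookup; the surrounding prompt logic is assumed, not taken from the PR:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// If ollama already resolves on PATH (e.g. a brew install),
	// skip prompting the user to link /usr/local/bin/ollama.
	if path, err := exec.LookPath("ollama"); err == nil {
		fmt.Println("found ollama at", path, "- not prompting")
		return
	}
	fmt.Println("ollama not on PATH - prompt to install the CLI")
}
```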
https://api.github.com/repos/ollama/ollama/issues/4852 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4852/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4852/comments | https://api.github.com/repos/ollama/ollama/issues/4852/events | https://github.com/ollama/ollama/pull/4852 | 2,337,827,714 | PR_kwDOJ0Z1Ps5xqIVb | 4,852 | Error handling load_single_document() in ingest.py | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasot... | [] | closed | false | null | [] | null | 0 | 2024-06-06T09:45:12 | 2024-06-09T17:41:08 | 2024-06-09T17:41:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4852",
"html_url": "https://github.com/ollama/ollama/pull/4852",
"diff_url": "https://github.com/ollama/ollama/pull/4852.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4852.patch",
"merged_at": "2024-06-09T17:41:08"
} | load_single_document() handles
- corrupt files
- empty (zero byte) files
- unsupported file extensions | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4852/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2017 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2017/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2017/comments | https://api.github.com/repos/ollama/ollama/issues/2017/events | https://github.com/ollama/ollama/pull/2017 | 2,084,476,509 | PR_kwDOJ0Z1Ps5kOMkJ | 2,017 | Fix show parameters | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-16T17:20:24 | 2024-01-16T18:34:44 | 2024-01-16T18:34:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2017",
"html_url": "https://github.com/ollama/ollama/pull/2017",
"diff_url": "https://github.com/ollama/ollama/pull/2017.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2017.patch",
"merged_at": "2024-01-16T18:34:44"
} | The ShowParameters call was converting some floats into ints. This simplifies the code and adds a unit test. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2017/timeline | null | null | true |
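A small reproduction of the display bug PR #2017 above fixes, assuming parameters round-trip through `map[string]any` and are printed with `%v` (illustrative, not the original code): Go's default float formatting drops the point from whole-valued floats, so they read as ints.

```go
package main

import "fmt"

func main() {
	params := map[string]any{"temperature": 1.0, "top_k": 40}

	// %v renders float64(1.0) as "1", indistinguishable from an int:
	fmt.Printf("temperature %v\n", params["temperature"]) // temperature 1

	// the dynamic type shows it was a float all along:
	fmt.Printf("%T\n", params["temperature"]) // float64
}
```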
https://api.github.com/repos/ollama/ollama/issues/1270 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1270/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1270/comments | https://api.github.com/repos/ollama/ollama/issues/1270/events | https://github.com/ollama/ollama/issues/1270 | 2,010,402,680 | I_kwDOJ0Z1Ps531E94 | 1,270 | Specify where to download and look for models | {
"login": "Talleyrand-34",
"id": 119809076,
"node_id": "U_kgDOByQkNA",
"avatar_url": "https://avatars.githubusercontent.com/u/119809076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Talleyrand-34",
"html_url": "https://github.com/Talleyrand-34",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 7 | 2023-11-25T00:59:47 | 2023-12-12T20:07:27 | 2023-11-26T01:56:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1270/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7166 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7166/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7166/comments | https://api.github.com/repos/ollama/ollama/issues/7166/events | https://github.com/ollama/ollama/issues/7166 | 2,579,797,833 | I_kwDOJ0Z1Ps6ZxJdJ | 7,166 | Qwen 2.5 72B missing stop parameter | {
"login": "bold84",
"id": 21118257,
"node_id": "MDQ6VXNlcjIxMTE4MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/21118257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bold84",
"html_url": "https://github.com/bold84",
"followers_url": "https://api.github.com/users/bold84/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 9 | 2024-10-10T20:40:02 | 2024-12-05T06:38:55 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Often doesn't stop generating...

PARAMETER stop <|endoftext|>
seems to be missing in the model configuration. Adding it solved the problem.
### ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7166/timeline | null | null | false |
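The fix described in #7166 above, written out as a Modelfile; the base tag and new model name are examples:

```
FROM qwen2.5:72b
PARAMETER stop <|endoftext|>
```

Then `ollama create qwen2.5-stop -f Modelfile` produces a copy with the extra stop token.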
https://api.github.com/repos/ollama/ollama/issues/5733 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5733/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5733/comments | https://api.github.com/repos/ollama/ollama/issues/5733/events | https://github.com/ollama/ollama/issues/5733 | 2,412,115,809 | I_kwDOJ0Z1Ps6Pxfdh | 5,733 | Installation on Linux fails because /usr/share/ollama does not exist. | {
"login": "richardstevenhack",
"id": 44449170,
"node_id": "MDQ6VXNlcjQ0NDQ5MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/44449170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardstevenhack",
"html_url": "https://github.com/richardstevenhack",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-07-16T22:03:08 | 2024-07-24T17:24:32 | 2024-07-16T22:56:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When you install using the install script on openSUSE Tumbleweed, the script fails because the adduser command with the -m flag does not create the directory /usr/share/ollama; it merely assigns that directory to the ollama user.
I had Claude Sonnet go over the install script line by line explain... | {
"login": "richardstevenhack",
"id": 44449170,
"node_id": "MDQ6VXNlcjQ0NDQ5MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/44449170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardstevenhack",
"html_url": "https://github.com/richardstevenhack",
"followers_url": "https... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5733/timeline | null | completed | false |
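A workaround sketch for #5733 above: create the home directory explicitly rather than relying on the user-creation flag. Paths are from the report; the actual install script's commands may differ:

```
sudo mkdir -p /usr/share/ollama
# skip the next line if the ollama user already exists
sudo useradd -r -s /bin/false -U -d /usr/share/ollama ollama
sudo chown -R ollama:ollama /usr/share/ollama
```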
https://api.github.com/repos/ollama/ollama/issues/2742 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2742/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2742/comments | https://api.github.com/repos/ollama/ollama/issues/2742/events | https://github.com/ollama/ollama/issues/2742 | 2,152,742,334 | I_kwDOJ0Z1Ps6AUD2- | 2,742 | How to improve ollama performance | {
"login": "gautam-fairpe",
"id": 127822235,
"node_id": "U_kgDOB55pmw",
"avatar_url": "https://avatars.githubusercontent.com/u/127822235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gautam-fairpe",
"html_url": "https://github.com/gautam-fairpe",
"followers_url": "https://api.github.com/... | [
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-02-25T12:29:35 | 2024-03-11T21:21:36 | 2024-03-11T21:21:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | current model params :
FROM llama2:13b-chat
PARAMETER temperature 0.2
PARAMETER num_ctx 4096
PARAMETER num_thread 16
PARAMETER use_mmap False
System config :
Ram 108 GB
T4 graphics card 16 gb

2. Download any gguf file and make a Modelfile for it (for example, I'm using Hermes 2 Pro):
Modelfile
```
FROM "./Hermes-2-Pro-Mistr... | {
"login": "savareyhano",
"id": 32730327,
"node_id": "MDQ6VXNlcjMyNzMwMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32730327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/savareyhano",
"html_url": "https://github.com/savareyhano",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4114/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/56 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/56/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/56/comments | https://api.github.com/repos/ollama/ollama/issues/56/events | https://github.com/ollama/ollama/pull/56 | 1,794,116,306 | PR_kwDOJ0Z1Ps5U8rWu | 56 | if directory cannot be resolved, do not fail | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 1 | 2023-07-07T19:28:14 | 2023-07-11T14:19:32 | 2023-07-08T03:18:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/56",
"html_url": "https://github.com/ollama/ollama/pull/56",
"diff_url": "https://github.com/ollama/ollama/pull/56.diff",
"patch_url": "https://github.com/ollama/ollama/pull/56.patch",
"merged_at": "2023-07-08T03:18:25"
} | allow for offline mode | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/56/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/56/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2782 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2782/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2782/comments | https://api.github.com/repos/ollama/ollama/issues/2782/events | https://github.com/ollama/ollama/issues/2782 | 2,156,791,878 | I_kwDOJ0Z1Ps6AjghG | 2,782 | Why the Gemma performs that bad with some simple questions? | {
"login": "brightzheng100",
"id": 1422425,
"node_id": "MDQ6VXNlcjE0MjI0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1422425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brightzheng100",
"html_url": "https://github.com/brightzheng100",
"followers_url": "https://api.gith... | [] | closed | false | null | [] | null | 4 | 2024-02-27T14:50:45 | 2024-05-10T01:14:33 | 2024-05-10T01:14:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Just tried the `Gemma` model, but not sure why it performed that badly.
```sh
ollama pull gemma:2b
ollama run gemma:2b
```
**So is it a model issue, or is the model hosted here different?**
For example - this should be a common-sense question but it has no idea:
```
>>> Do you think whether human can fly? Tell ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2782/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2928 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2928/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2928/comments | https://api.github.com/repos/ollama/ollama/issues/2928/events | https://github.com/ollama/ollama/issues/2928 | 2,168,587,625 | I_kwDOJ0Z1Ps6BQgVp | 2,928 | Error: could not connect to ollama app, is it running? | {
"login": "ttkrpink",
"id": 2522889,
"node_id": "MDQ6VXNlcjI1MjI4ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2522889?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttkrpink",
"html_url": "https://github.com/ttkrpink",
"followers_url": "https://api.github.com/users/ttkrp... | [] | closed | false | null | [] | null | 6 | 2024-03-05T08:18:44 | 2024-03-18T08:44:51 | 2024-03-06T22:24:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I think I removed everything and reinstalled ollama on Ubuntu 22.04. After a “fresh” install, the command line cannot connect to the ollama app.
```
Ubuntu: ~ $ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%#=... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2928/timeline | null | not_planned | false |
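For "could not connect" reports like #2928 above (and #3999 earlier), a few standard checks on a systemd install, assuming the default port of 11434:

```
systemctl status ollama                    # is the service running?
journalctl -u ollama -e                    # recent server logs
curl http://127.0.0.1:11434/api/version    # does the API answer?
```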
https://api.github.com/repos/ollama/ollama/issues/4157 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4157/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4157/comments | https://api.github.com/repos/ollama/ollama/issues/4157/events | https://github.com/ollama/ollama/issues/4157 | 2,279,259,620 | I_kwDOJ0Z1Ps6H2r3k | 4,157 | Bunny-Llama-3-8B-V | {
"login": "rawzone",
"id": 2092357,
"node_id": "MDQ6VXNlcjIwOTIzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2092357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rawzone",
"html_url": "https://github.com/rawzone",
"followers_url": "https://api.github.com/users/rawzone/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2024-05-05T00:41:29 | 2024-05-09T08:59:47 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Would love to see: [Bunny-Llama-3-8B-V](https://huggingface.co/BAAI/Bunny-Llama-3-8B-V) included in the Ollama models.
> Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, like EVA-CLIP, SigLIP and language backbones, including Llama-3-8B, Phi-1.5, Stab... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4157/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4157/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6602 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6602/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6602/comments | https://api.github.com/repos/ollama/ollama/issues/6602/events | https://github.com/ollama/ollama/issues/6602 | 2,502,088,293 | I_kwDOJ0Z1Ps6VItZl | 6,602 | n_ctx parameter display error | {
"login": "JinheTang",
"id": 97284834,
"node_id": "U_kgDOBcxy4g",
"avatar_url": "https://avatars.githubusercontent.com/u/97284834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JinheTang",
"html_url": "https://github.com/JinheTang",
"followers_url": "https://api.github.com/users/JinheTan... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-09-03T07:06:16 | 2024-09-03T16:24:39 | 2024-09-03T16:24:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I create a model from a Modelfile:
```log
FROM mistral:latest
PARAMETER num_ctx 4096
```
and load with:
```log
ollama create mistral:latest-nctx4096 -f Modelfile
ollama run mistral:latest-nctx4096
```
setting `num_ctx` to 4096, the server end nonetheless reports `n_ctx` as `16384`:
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6602/timeline | null | completed | false |
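A plausible explanation for the 16384 in #6602 above, offered as an assumption rather than a confirmed diagnosis: the server sizes its context as `num_ctx × parallel slots`, and with a default of 4 parallel requests 4096 × 4 = 16384. Pinning parallelism tests the hypothesis:

```
# with a single slot, the reported n_ctx should match the Modelfile value
OLLAMA_NUM_PARALLEL=1 ollama serve
```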
https://api.github.com/repos/ollama/ollama/issues/5045 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5045/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5045/comments | https://api.github.com/repos/ollama/ollama/issues/5045/events | https://github.com/ollama/ollama/pull/5045 | 2,353,703,341 | PR_kwDOJ0Z1Ps5ygAj8 | 5,045 | openai: do not set temperature to 0 when setting seed | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-06-14T16:24:20 | 2024-06-14T20:43:57 | 2024-06-14T20:43:56 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5045",
"html_url": "https://github.com/ollama/ollama/pull/5045",
"diff_url": "https://github.com/ollama/ollama/pull/5045.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5045.patch",
"merged_at": "2024-06-14T20:43:56"
} | `temperature` was previously set to 0 for reproducible outputs when setting seed; however, this is not required
Note https://github.com/ollama/ollama/issues/4990 is still an open issue on Nvidia/AMD GPUs
Fixes https://github.com/ollama/ollama/issues/5044 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5045/timeline | null | null | true |
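A sketch of the mapping change PR #5045 above describes, with hypothetical request and option names; the real code lives in Ollama's OpenAI compatibility layer:

```go
package main

import "fmt"

// Request mirrors the relevant OpenAI fields (hypothetical struct).
type Request struct {
	Seed        *int
	Temperature *float64
}

func toOptions(r Request) map[string]any {
	opts := map[string]any{}
	if r.Seed != nil {
		opts["seed"] = *r.Seed
		// previously temperature was forced to 0 here; that is not required
	}
	if r.Temperature != nil {
		opts["temperature"] = *r.Temperature
	}
	return opts
}

func main() {
	seed, temp := 42, 0.8
	fmt.Println(toOptions(Request{Seed: &seed, Temperature: &temp}))
	// map[seed:42 temperature:0.8]
}
```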
https://api.github.com/repos/ollama/ollama/issues/795 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/795/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/795/comments | https://api.github.com/repos/ollama/ollama/issues/795/events | https://github.com/ollama/ollama/issues/795 | 1,944,162,393 | I_kwDOJ0Z1Ps5z4ZBZ | 795 | Unable to download any models on Amazon Linux 2023 on EC2 | {
"login": "drnushooz",
"id": 10852951,
"node_id": "MDQ6VXNlcjEwODUyOTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10852951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drnushooz",
"html_url": "https://github.com/drnushooz",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 5 | 2023-10-16T01:15:40 | 2023-12-20T21:46:42 | 2023-10-16T20:17:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am trying to download Llama2 and Medllama2 on Amazon Linux 2023. I have verified that the security group has permission to reach the outside world, and it does. Ollama installation succeeded using
```
curl https://ollama.ai/install.sh | sh
```
However, when I try to run `ollama pull medllama2` it ... | {
"login": "drnushooz",
"id": 10852951,
"node_id": "MDQ6VXNlcjEwODUyOTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10852951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drnushooz",
"html_url": "https://github.com/drnushooz",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/795/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7867 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7867/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7867/comments | https://api.github.com/repos/ollama/ollama/issues/7867/events | https://github.com/ollama/ollama/issues/7867 | 2,700,131,100 | I_kwDOJ0Z1Ps6g8Lsc | 7,867 | Deepseek (various) 236b crashes on run | {
"login": "Maltz42",
"id": 20978744,
"node_id": "MDQ6VXNlcjIwOTc4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/20978744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maltz42",
"html_url": "https://github.com/Maltz42",
"followers_url": "https://api.github.com/users/Maltz4... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | open | false | null | [] | null | 19 | 2024-11-27T23:00:55 | 2025-01-13T23:43:24 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Deepseek V2, V2.5, and V2-coder all crash with an OOM error when loading the 236b size. Other versions of Deepseek may as well; those are all I've tested. Hardware is dual A6000s with 48GB each.
```
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7867/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/6241 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6241/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6241/comments | https://api.github.com/repos/ollama/ollama/issues/6241/events | https://github.com/ollama/ollama/pull/6241 | 2,454,274,622 | PR_kwDOJ0Z1Ps53wWsa | 6,241 | Speech Prototype | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 7 | 2024-08-07T20:19:13 | 2025-01-21T04:07:49 | 2024-11-21T10:04:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6241",
"html_url": "https://github.com/ollama/ollama/pull/6241",
"diff_url": "https://github.com/ollama/ollama/pull/6241.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6241.patch",
"merged_at": null
} | whisper.cpp - custom ggml, wav audio
Instructions for running in md
As of now this would require conversion to ggml format to run inference; we would wait to see the general momentum around speech-to-text models as bigger players release foundational models. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6241/reactions",
"total_count": 14,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 10,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6241/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6372 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6372/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6372/comments | https://api.github.com/repos/ollama/ollama/issues/6372/events | https://github.com/ollama/ollama/issues/6372 | 2,468,038,732 | I_kwDOJ0Z1Ps6TG0hM | 6,372 | System tray icon is empty | {
"login": "Biyakuga",
"id": 67515021,
"node_id": "MDQ6VXNlcjY3NTE1MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/67515021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Biyakuga",
"html_url": "https://github.com/Biyakuga",
"followers_url": "https://api.github.com/users/Biy... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-08-15T13:17:16 | 2024-08-15T22:31:17 | 2024-08-15T22:31:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
In the image below, you'll notice that the hover bubble for the Ollama system tray icon is empty. In contrast, other apps display the application's name when you hover over their icons.

### OS
Window... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6372/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1046 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1046/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1046/comments | https://api.github.com/repos/ollama/ollama/issues/1046/events | https://github.com/ollama/ollama/issues/1046 | 1,984,002,491 | I_kwDOJ0Z1Ps52QXm7 | 1,046 | Add flag to force CPU only (instead of only autodetecting based on OS) | {
"login": "joake",
"id": 11403993,
"node_id": "MDQ6VXNlcjExNDAzOTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/11403993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joake",
"html_url": "https://github.com/joake",
"followers_url": "https://api.github.com/users/joake/follow... | [] | closed | false | null | [] | null | 9 | 2023-11-08T16:40:57 | 2024-01-30T21:16:23 | 2024-01-14T22:03:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Requesting a build flag to make ollama use only the CPU, not the GPU.
Users on macOS models without support for Metal can only run ollama on the CPU. Currently, in llama.go the function NumGPU defaults to returning 1 (enabling Metal by default on all macOS) and the function chooseRunners will add metal to the runners by d... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1046/timeline | null | completed | false |
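While #1046 asks for a build flag, the runtime already exposes a `num_gpu` option, and setting it to 0 keeps every layer on the CPU. A minimal sketch against the REST API (model name is illustrative):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 0 }
}'
```

The same value can be baked into a model with `PARAMETER num_gpu 0` in a Modelfile.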
https://api.github.com/repos/ollama/ollama/issues/1610 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1610/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1610/comments | https://api.github.com/repos/ollama/ollama/issues/1610/events | https://github.com/ollama/ollama/issues/1610 | 2,049,237,003 | I_kwDOJ0Z1Ps56JOAL | 1,610 | Linux Mint ollama pull model | {
"login": "dwk601",
"id": 108056780,
"node_id": "U_kgDOBnDQzA",
"avatar_url": "https://avatars.githubusercontent.com/u/108056780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwk601",
"html_url": "https://github.com/dwk601",
"followers_url": "https://api.github.com/users/dwk601/follower... | [] | closed | false | null | [] | null | 3 | 2023-12-19T18:37:57 | 2023-12-19T19:23:37 | 2023-12-19T18:42:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm using Linux Mint, and after pulling a model with ollama pull mistral I don't see any downloaded model in the ollama folder under the /usr directory.
Does anyone know where the model was downloaded? | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1610/timeline | null | completed | false |
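On the storage question in #1610: pulled models live under the server user's `.ollama` directory, not under `/usr/ollama`. A quick check of the two common locations (paths assume a stock install):

```
# when ollama runs as a systemd service (the install script's default)
ls /usr/share/ollama/.ollama/models
# when ollama runs under your own user account
ls ~/.ollama/models
```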
https://api.github.com/repos/ollama/ollama/issues/62 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/62/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/62/comments | https://api.github.com/repos/ollama/ollama/issues/62/events | https://github.com/ollama/ollama/pull/62 | 1,795,815,924 | PR_kwDOJ0Z1Ps5VCShs | 62 | llama: replace bindings with `llama_*` calls | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 2 | 2023-07-10T02:44:43 | 2023-07-11T02:34:35 | 2023-07-10T22:51:37 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/62",
"html_url": "https://github.com/ollama/ollama/pull/62",
"diff_url": "https://github.com/ollama/ollama/pull/62.diff",
"patch_url": "https://github.com/ollama/ollama/pull/62.patch",
"merged_at": null
} | Early PR to replace the C++ binding files with direct calls to llama.cpp from Go. It's missing quite a few features the C++ `binding.cpp` files had, although those were almost direct copies of the `main` example in llama.cpp's repo. We should be able to add them back relatively easily. It's most likely slower as well so...
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/62/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/62/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5157 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5157/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5157/comments | https://api.github.com/repos/ollama/ollama/issues/5157/events | https://github.com/ollama/ollama/issues/5157 | 2,363,401,313 | I_kwDOJ0Z1Ps6M3qRh | 5,157 | Update llama.cpp to support qwen2-57B-A14B pls | {
"login": "CoreJa",
"id": 28624864,
"node_id": "MDQ6VXNlcjI4NjI0ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/28624864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CoreJa",
"html_url": "https://github.com/CoreJa",
"followers_url": "https://api.github.com/users/CoreJa/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-06-20T02:57:01 | 2024-07-05T17:25:59 | 2024-07-05T17:25:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
llama.cpp is now able to support the qwen2-57B-A14B MoE model via https://github.com/ggerganov/llama.cpp/pull/7835. Please help update the llama.cpp submodule to a newer version; otherwise, i-quants of this model (I tried iq4_xs) end up with a `CUDA error: CUBLAS_STATUS_NOT_INITIALIZED`.
### OS
L... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5157/timeline | null | completed | false |
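For the submodule bump requested in #5157, a sketch of the update-and-regenerate cycle, assuming the submodule path `llm/llama.cpp` used by the repo at the time and a placeholder commit:

```
git submodule update --init llm/llama.cpp
git -C llm/llama.cpp fetch origin
git -C llm/llama.cpp checkout <newer-commit>   # placeholder: a commit containing llama.cpp PR 7835
go generate ./...
go build .
```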
https://api.github.com/repos/ollama/ollama/issues/6291 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6291/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6291/comments | https://api.github.com/repos/ollama/ollama/issues/6291/events | https://github.com/ollama/ollama/pull/6291 | 2,458,500,400 | PR_kwDOJ0Z1Ps53-pxJ | 6,291 | Don't hard fail on sparse setup error | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-08-09T18:58:48 | 2024-08-09T19:30:27 | 2024-08-09T19:30:25 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6291",
"html_url": "https://github.com/ollama/ollama/pull/6291",
"diff_url": "https://github.com/ollama/ollama/pull/6291.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6291.patch",
"merged_at": "2024-08-09T19:30:25"
} | It seems this can fail in some cases, but we can proceed with the download anyway.
I haven't reproduced the user's failure, but based on the log message and the recent regression, this seems highly likely to be the cause.
Fixes #6263 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6291/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5254 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5254/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5254/comments | https://api.github.com/repos/ollama/ollama/issues/5254/events | https://github.com/ollama/ollama/issues/5254 | 2,369,992,286 | I_kwDOJ0Z1Ps6NQzZe | 5,254 | Wrong parameters for gemma:7b | {
"login": "qzc438",
"id": 61488260,
"node_id": "MDQ6VXNlcjYxNDg4MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qzc438",
"html_url": "https://github.com/qzc438",
"followers_url": "https://api.github.com/users/qzc438/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 8 | 2024-06-24T11:27:14 | 2024-07-09T16:20:23 | 2024-07-09T16:20:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Type `ollama show gemma:7b`: the reported parameter count is 9B, not 7B.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5254/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2584 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2584/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2584/comments | https://api.github.com/repos/ollama/ollama/issues/2584/events | https://github.com/ollama/ollama/pull/2584 | 2,141,243,546 | PR_kwDOJ0Z1Ps5nOmg6 | 2,584 | Update faq.md | {
"login": "elsatch",
"id": 653433,
"node_id": "MDQ6VXNlcjY1MzQzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/653433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsatch",
"html_url": "https://github.com/elsatch",
"followers_url": "https://api.github.com/users/elsatch/fo... | [] | closed | false | null | [] | null | 2 | 2024-02-18T23:47:12 | 2024-02-20T13:56:43 | 2024-02-20T03:33:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2584",
"html_url": "https://github.com/ollama/ollama/pull/2584",
"diff_url": "https://github.com/ollama/ollama/pull/2584.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2584.patch",
"merged_at": null
} | Added a section for Setting environment variables on Windows | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2584/timeline | null | null | true |
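The FAQ addition in #2584 reduces to one command plus a restart; a sketch from a Windows command prompt (the variable and value are illustrative):

```
:: persist a user-level environment variable; takes effect in newly opened shells
setx OLLAMA_HOST 0.0.0.0
:: quit and restart the Ollama tray app so the server picks up the change
```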
https://api.github.com/repos/ollama/ollama/issues/7882 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7882/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7882/comments | https://api.github.com/repos/ollama/ollama/issues/7882/events | https://github.com/ollama/ollama/issues/7882 | 2,706,011,431 | I_kwDOJ0Z1Ps6hSnUn | 7,882 | Support AMD Radeon 890m and NPU on Ryzen AI PC | {
"login": "GiorCocc",
"id": 41432327,
"node_id": "MDQ6VXNlcjQxNDMyMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/41432327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GiorCocc",
"html_url": "https://github.com/GiorCocc",
"followers_url": "https://api.github.com/users/Gio... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-11-29T19:31:42 | 2024-12-02T15:40:49 | 2024-12-02T15:40:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please consider to add the support on AMD iGPU like Radeon 890m available on AMD Ryzen AI 9 HX 370 and NPU. Thanks | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7882/timeline | null | not_planned | false |
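On #7882: Ollama's ROCm build only targets supported discrete GPUs, and there is no NPU path. A commonly suggested, explicitly unsupported workaround for RDNA iGPUs is ROCm's `HSA_OVERRIDE_GFX_VERSION`; the value below is an assumption and may not work for the 890M:

```
sudo systemctl edit ollama
# add under [Service]:
#   Environment="HSA_OVERRIDE_GFX_VERSION=11.0.2"   # assumed value for an RDNA3-class iGPU
sudo systemctl restart ollama
```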
https://api.github.com/repos/ollama/ollama/issues/8634 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8634/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8634/comments | https://api.github.com/repos/ollama/ollama/issues/8634/events | https://github.com/ollama/ollama/issues/8634 | 2,815,768,303 | I_kwDOJ0Z1Ps6n1Tbv | 8,634 | Ollama is not installing on Termux | {
"login": "imvickykumar999",
"id": 50515418,
"node_id": "MDQ6VXNlcjUwNTE1NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/50515418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imvickykumar999",
"html_url": "https://github.com/imvickykumar999",
"followers_url": "https://api... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2025-01-28T14:01:22 | 2025-01-30T07:10:12 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ~ $ curl -fsSL https://ollama.com/install.sh | sh
```
>>> Installing ollama to /usr
No superuser binary detected.
Are you rooted?
``` | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8634/timeline | null | null | false |
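The install script in #8634 assumes a rooted `/usr` layout, which Termux does not have; the usual route is Termux's own package, assuming it is available for your repo and architecture:

```
pkg update
pkg install ollama    # assumes the Termux repo ships an ollama build
ollama serve &        # start the server in the background
ollama run llama3.2   # model name is illustrative
```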
https://api.github.com/repos/ollama/ollama/issues/1559 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1559/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1559/comments | https://api.github.com/repos/ollama/ollama/issues/1559/events | https://github.com/ollama/ollama/issues/1559 | 2,044,618,908 | I_kwDOJ0Z1Ps553mic | 1,559 | Pass in prompt as arguments is broken for multimodal models | {
"login": "elsatch",
"id": 653433,
"node_id": "MDQ6VXNlcjY1MzQzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/653433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsatch",
"html_url": "https://github.com/elsatch",
"followers_url": "https://api.github.com/users/elsatch/fo... | [] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 6 | 2023-12-16T05:41:21 | 2024-03-11T23:59:16 | 2024-03-11T23:59:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have been trying to evaluate the performance of the multimodal models by passing the prompt as an argument, as stated in the README.md section. Whenever I pass the image as an argument to the ollama CLI, it hallucinates the whole response. Asking about the same image in the regular chat works without problems.
This is a... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1559/timeline | null | completed | false |
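For reference on #1559, the README's one-shot multimodal invocation embeds the image path in the prompt string; the hallucination was reported with calls of this shape (model and path are illustrative):

```
ollama run llava "What is in this image? ./picture.jpg"
```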
https://api.github.com/repos/ollama/ollama/issues/4645 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4645/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4645/comments | https://api.github.com/repos/ollama/ollama/issues/4645/events | https://github.com/ollama/ollama/issues/4645 | 2,317,477,611 | I_kwDOJ0Z1Ps6KIebr | 4,645 | 3 GPUs, 2xNVIDIA and 1x AMD onboard - Can I force Python3 to use AMD, and Ollama to use 2xNVIDIA | {
"login": "HerroHK",
"id": 170845944,
"node_id": "U_kgDOCi7m-A",
"avatar_url": "https://avatars.githubusercontent.com/u/170845944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HerroHK",
"html_url": "https://github.com/HerroHK",
"followers_url": "https://api.github.com/users/HerroHK/foll... | [] | open | false | null | [] | null | 0 | 2024-05-26T04:37:37 | 2024-05-26T04:37:37 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | My setup runs:
1x Asus TUF GAMING B650-PLUS ATX with onboard AMD GPU
2x 16GD6 ASUS RTX4060Ti ProArt OC
When looking at nvidia-smi, I can see that one of the RTX4060s is dedicated to Ollama, and the other to python3 (Automatic1111).
I have tried the "export CUDA_VISIBLE_DEVICES=0,1" command, but it seems as long a... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4645/timeline | null | null | false |
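On #4645: `CUDA_VISIBLE_DEVICES` only affects processes that inherit it, so an `export` in a shell never reaches an ollama systemd service. A sketch separating the two (device indices are assumptions, and whether Automatic1111 can actually use the AMD iGPU depends on a ROCm/DirectML build):

```
# give the ollama service both NVIDIA cards
sudo systemctl edit ollama
# add under [Service]:
#   Environment="CUDA_VISIBLE_DEVICES=0,1"
sudo systemctl restart ollama

# hide the NVIDIA cards from the python3 process
CUDA_VISIBLE_DEVICES= python3 launch.py   # launch.py is illustrative
```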
https://api.github.com/repos/ollama/ollama/issues/2649 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2649/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2649/comments | https://api.github.com/repos/ollama/ollama/issues/2649/events | https://github.com/ollama/ollama/issues/2649 | 2,147,553,104 | I_kwDOJ0Z1Ps6AAQ9Q | 2,649 | Issue with new model Gemma | {
"login": "jaifar530",
"id": 31308766,
"node_id": "MDQ6VXNlcjMxMzA4NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/31308766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaifar530",
"html_url": "https://github.com/jaifar530",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2024-02-21T19:46:40 | 2024-02-21T23:18:25 | 2024-02-21T23:18:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | After pulling the new Gemma model I got this issue. Note that the issue only occurs with two Gemma models; the others work fine.

| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2649/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2699 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2699/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2699/comments | https://api.github.com/repos/ollama/ollama/issues/2699/events | https://github.com/ollama/ollama/issues/2699 | 2,150,347,691 | I_kwDOJ0Z1Ps6AK7Or | 2,699 | Slow Response Time on Windows Prompt Compared to WSL | {
"login": "samer-alhalabi",
"id": 79296172,
"node_id": "MDQ6VXNlcjc5Mjk2MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/79296172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samer-alhalabi",
"html_url": "https://github.com/samer-alhalabi",
"followers_url": "https://api.gi... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-02-23T04:28:46 | 2024-07-08T04:13:06 | 2024-02-26T19:21:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When executing prompts on Ollama using the Windows version, I experience considerable delays and slowness in response time. However, when running the exact same model and prompt via WSL, the response time is notably faster. Given that the Windows version of Ollama is currently in preview, I understand there may be optimiza...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2699/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7740 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7740/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7740/comments | https://api.github.com/repos/ollama/ollama/issues/7740/events | https://github.com/ollama/ollama/issues/7740 | 2,672,082,408 | I_kwDOJ0Z1Ps6fRL3o | 7,740 | Performance is decreasing | {
"login": "murzein",
"id": 148742019,
"node_id": "U_kgDOCN2fgw",
"avatar_url": "https://avatars.githubusercontent.com/u/148742019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/murzein",
"html_url": "https://github.com/murzein",
"followers_url": "https://api.github.com/users/murzein/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-11-19T12:58:14 | 2024-11-19T23:53:50 | 2024-11-19T23:53:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Load any model, for example gemma2:27b
OLLAMA_KEEP_ALIVE: -1
command: **ollama run gemma2:27b --verbose**
message: **tell me about amd**
**1 iteration**
total duration: 12.9263992s
load duration: 32.8561ms
prompt eval count: **13 token(s)** <----
prompt eval duration:... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7740/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/7107 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7107/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7107/comments | https://api.github.com/repos/ollama/ollama/issues/7107/events | https://github.com/ollama/ollama/issues/7107 | 2,568,601,801 | I_kwDOJ0Z1Ps6ZGcDJ | 7,107 | Adrenalin Edition 24.9.1/24.10.1 slow ollama performance | {
"login": "skarabaraks",
"id": 2365359,
"node_id": "MDQ6VXNlcjIzNjUzNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2365359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skarabaraks",
"html_url": "https://github.com/skarabaraks",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 15 | 2024-10-06T11:07:21 | 2025-01-28T23:47:14 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Both Adrenalin Edition drivers (24.9.1 and 24.10.1) significantly slow Windows performance. GPU acceleration appears disabled.
No issues with ollama on Adrenalin 24.8.1 (slightly older driver)
My system
Windows 11 24H2
GPU: RX6800 XT
CPU: Ryzen 5900XT
32GB RAM
Ollama version: Lates... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7107/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7107/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2946 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2946/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2946/comments | https://api.github.com/repos/ollama/ollama/issues/2946/events | https://github.com/ollama/ollama/pull/2946 | 2,170,521,445 | PR_kwDOJ0Z1Ps5oyXxi | 2,946 | Add Community Integration: OpenAOE | {
"login": "fly2tomato",
"id": 11885550,
"node_id": "MDQ6VXNlcjExODg1NTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11885550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fly2tomato",
"html_url": "https://github.com/fly2tomato",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2024-03-06T02:52:01 | 2024-03-25T18:57:40 | 2024-03-25T18:57:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2946",
"html_url": "https://github.com/ollama/ollama/pull/2946",
"diff_url": "https://github.com/ollama/ollama/pull/2946.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2946.patch",
"merged_at": "2024-03-25T18:57:40"
} | [OpenAOE](https://github.com/InternLM/OpenAOE) is a LLM Group Chat Framework: chat with multiple LLMs at the same time.
Now the mistral-7b and gemma-7b models are added into OpenAOE under Ollama inference.

 | {
"login": "TianWuYuJiangHenShou",
"id": 20592000,
"node_id": "MDQ6VXNlcjIwNTkyMDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/20592000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TianWuYuJiangHenShou",
"html_url": "https://github.com/TianWuYuJiangHenShou",
"followers_url... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-10-29T03:30:19 | 2024-10-30T07:13:53 | 2024-10-29T18:45:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
time=2024-10-29T03:23:31.503Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx = 16384
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_n... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7408/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/253 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/253/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/253/comments | https://api.github.com/repos/ollama/ollama/issues/253/events | https://github.com/ollama/ollama/pull/253 | 1,831,879,303 | PR_kwDOJ0Z1Ps5W8PxK | 253 | use a pipe to push to registry with progress | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-08-01T19:17:50 | 2023-08-03T19:11:24 | 2023-08-03T19:11:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/253",
"html_url": "https://github.com/ollama/ollama/pull/253",
"diff_url": "https://github.com/ollama/ollama/pull/253.diff",
"patch_url": "https://github.com/ollama/ollama/pull/253.patch",
"merged_at": "2023-08-03T19:11:23"
} | Switch to a monolithic upload, streamed through a pipe, instead of a chunked upload, so that progress can be reported.
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/253/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8529 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8529/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8529/comments | https://api.github.com/repos/ollama/ollama/issues/8529/events | https://github.com/ollama/ollama/issues/8529 | 2,803,321,089 | I_kwDOJ0Z1Ps6nF0kB | 8,529 | feat: OpenAI reasoning_content compatibility | {
"login": "EntropyYue",
"id": 164553692,
"node_id": "U_kgDOCc7j3A",
"avatar_url": "https://avatars.githubusercontent.com/u/164553692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EntropyYue",
"html_url": "https://github.com/EntropyYue",
"followers_url": "https://api.github.com/users/Ent... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2025-01-22T04:09:25 | 2025-01-22T07:19:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Current thinking models output XML tags to distinguish thinking from answering; a new feature is needed to make the `reasoning_content` field of the OpenAI SDK work normally | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8529/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8529/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7675 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7675/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7675/comments | https://api.github.com/repos/ollama/ollama/issues/7675/events | https://github.com/ollama/ollama/pull/7675 | 2,660,440,586 | PR_kwDOJ0Z1Ps6B_LKa | 7,675 | runner.go: Increase survivability of main processing loop | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-11-15T00:49:04 | 2024-11-15T01:18:42 | 2024-11-15T01:18:41 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7675",
"html_url": "https://github.com/ollama/ollama/pull/7675",
"diff_url": "https://github.com/ollama/ollama/pull/7675.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7675.patch",
"merged_at": "2024-11-15T01:18:41"
} | Currently, if an error occurs during the prep stages (such as tokenizing) of a single request, it will only affect that request. However, if an error happens during decoding, it can take down the entire runner.
Instead, it's better to drop the tokens that triggered the error and try to keep going. However, we also n... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7675/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7675/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2661 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2661/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2661/comments | https://api.github.com/repos/ollama/ollama/issues/2661/events | https://github.com/ollama/ollama/issues/2661 | 2,148,212,465 | I_kwDOJ0Z1Ps6ACx7x | 2,661 | Migrating models from WSL2 to Native Windows | {
"login": "circbuf255",
"id": 139676887,
"node_id": "U_kgDOCFNM1w",
"avatar_url": "https://avatars.githubusercontent.com/u/139676887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/circbuf255",
"html_url": "https://github.com/circbuf255",
"followers_url": "https://api.github.com/users/cir... | [] | closed | false | null | [] | null | 2 | 2024-02-22T05:08:17 | 2024-05-21T23:30:48 | 2024-02-22T08:17:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **What is the correct workflow to migrate from WSL2 to Native Windows?** Migrating models (blobs/manifests) from WSL2 to Windows does not seem to work as expected. For those with hundreds of GB already downloaded in WSL2, there should be a method to move those to native Windows. The method I tried that does not work... | {
"login": "circbuf255",
"id": 139676887,
"node_id": "U_kgDOCFNM1w",
"avatar_url": "https://avatars.githubusercontent.com/u/139676887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/circbuf255",
"html_url": "https://github.com/circbuf255",
"followers_url": "https://api.github.com/users/cir... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2661/timeline | null | completed | false |
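On #2661, the blob and manifest trees are portable if copied together; a sketch from a Windows command prompt, assuming an Ubuntu distro and default store locations on both sides:

```
:: paths are assumptions for a default WSL2 service install
robocopy \\wsl$\Ubuntu\usr\share\ollama\.ollama\models\blobs %USERPROFILE%\.ollama\models\blobs /E
robocopy \\wsl$\Ubuntu\usr\share\ollama\.ollama\models\manifests %USERPROFILE%\.ollama\models\manifests /E
```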
https://api.github.com/repos/ollama/ollama/issues/4899 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4899/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4899/comments | https://api.github.com/repos/ollama/ollama/issues/4899/events | https://github.com/ollama/ollama/issues/4899 | 2,339,745,576 | I_kwDOJ0Z1Ps6Lda8o | 4,899 | Failed to get max tokens for LLM with name qwen2:7b-instruct-fp16 with ollama | {
"login": "wenlong1234",
"id": 106233935,
"node_id": "U_kgDOBlUATw",
"avatar_url": "https://avatars.githubusercontent.com/u/106233935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wenlong1234",
"html_url": "https://github.com/wenlong1234",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-07T07:05:29 | 2024-06-11T01:41:50 | 2024-06-09T17:24:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
utils.py 328 : Failed to get max tokens for LLM with name qwen2:7b-instruct-fp16. Defaulting to 4096.
api_server-1 | Traceback (most recent call last):
api_server-1 | File "/app/danswer/llm/utils.py", line 318, in get_llm_max_... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4899/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2806 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2806/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2806/comments | https://api.github.com/repos/ollama/ollama/issues/2806/events | https://github.com/ollama/ollama/issues/2806 | 2,158,720,761 | I_kwDOJ0Z1Ps6Aq3b5 | 2,806 | How do we publicly show our model in the library? | {
"login": "WangRongsheng",
"id": 55651568,
"node_id": "MDQ6VXNlcjU1NjUxNTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/55651568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WangRongsheng",
"html_url": "https://github.com/WangRongsheng",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 2 | 2024-02-28T11:17:14 | 2024-02-28T14:37:35 | 2024-02-28T14:37:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | We created a model - [Aurora](https://ollama.com/wangrongsheng/aurora) - which is a fine-tuned Mixtral-8X7B model with bilingual (Chinese and English) dialog. But we can't search for it in the library.

| {
"login": "WangRongsheng",
"id": 55651568,
"node_id": "MDQ6VXNlcjU1NjUxNTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/55651568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WangRongsheng",
"html_url": "https://github.com/WangRongsheng",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2806/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7552 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7552/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7552/comments | https://api.github.com/repos/ollama/ollama/issues/7552/events | https://github.com/ollama/ollama/issues/7552 | 2,640,548,513 | I_kwDOJ0Z1Ps6dY5Kh | 7,552 | How to modify n_ctx | {
"login": "wuhongsheng",
"id": 10295461,
"node_id": "MDQ6VXNlcjEwMjk1NDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10295461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wuhongsheng",
"html_url": "https://github.com/wuhongsheng",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-11-07T10:21:49 | 2024-11-08T02:12:16 | 2024-11-08T02:12:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
How to modify n_ctx
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.4 | {
"login": "wuhongsheng",
"id": 10295461,
"node_id": "MDQ6VXNlcjEwMjk1NDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10295461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wuhongsheng",
"html_url": "https://github.com/wuhongsheng",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7552/timeline | null | completed | false |
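Answering the shape of #7552: `n_ctx` maps to Ollama's `num_ctx` option, settable per session, per request, or per model (model name and value are illustrative):

```
# per session, inside the REPL
ollama run llama3
# then at the >>> prompt: /set parameter num_ctx 8192

# per request, via the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "hello",
  "options": { "num_ctx": 8192 }
}'

# per model, via a Modelfile containing:
#   FROM llama3
#   PARAMETER num_ctx 8192
ollama create llama3-8k -f Modelfile
```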
https://api.github.com/repos/ollama/ollama/issues/3634 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3634/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3634/comments | https://api.github.com/repos/ollama/ollama/issues/3634/events | https://github.com/ollama/ollama/pull/3634 | 2,241,828,741 | PR_kwDOJ0Z1Ps5slIuz | 3,634 | Update README.md | {
"login": "rapidarchitect",
"id": 126218667,
"node_id": "U_kgDOB4Xxqw",
"avatar_url": "https://avatars.githubusercontent.com/u/126218667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rapidarchitect",
"html_url": "https://github.com/rapidarchitect",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 1 | 2024-04-14T00:06:48 | 2024-11-21T08:26:05 | 2024-11-21T08:26:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3634",
"html_url": "https://github.com/ollama/ollama/pull/3634",
"diff_url": "https://github.com/ollama/ollama/pull/3634.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3634.patch",
"merged_at": null
} | Added the Taipy Ollama interface to the bottom of the web and desktop interfaces list | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3634/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4105 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4105/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4105/comments | https://api.github.com/repos/ollama/ollama/issues/4105/events | https://github.com/ollama/ollama/issues/4105 | 2,276,400,905 | I_kwDOJ0Z1Ps6Hrx8J | 4,105 | Confusing error on linux with noexec on /tmp - Error: llama runner process no longer running: 1 | {
"login": "utility-aagrawal",
"id": 140737044,
"node_id": "U_kgDOCGN6FA",
"avatar_url": "https://avatars.githubusercontent.com/u/140737044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utility-aagrawal",
"html_url": "https://github.com/utility-aagrawal",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 14 | 2024-05-02T20:13:48 | 2024-05-08T21:10:15 | 2024-05-08T21:10:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I installed ollama on my Ubuntu 22.04 machine using the command curl -fsSL https://ollama.com/install.sh | sh
I ran: ollama run llama3 and got this error:
Error: llama runner process no longer running: 1
Can someone help me resolve it?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
##... | {
"login": "utility-aagrawal",
"id": 140737044,
"node_id": "U_kgDOCGN6FA",
"avatar_url": "https://avatars.githubusercontent.com/u/140737044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utility-aagrawal",
"html_url": "https://github.com/utility-aagrawal",
"followers_url": "https://api.gi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4105/timeline | null | completed | false |
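The resolution hinted at in the #4105 title: with `noexec` on `/tmp`, the runner binaries cannot execute, and `OLLAMA_TMPDIR` relocates them (the directory path is an assumption):

```
sudo mkdir -p /usr/share/ollama/tmp
sudo chown ollama:ollama /usr/share/ollama/tmp
sudo systemctl edit ollama
# add under [Service]:
#   Environment="OLLAMA_TMPDIR=/usr/share/ollama/tmp"
sudo systemctl restart ollama
```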
https://api.github.com/repos/ollama/ollama/issues/1862 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1862/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1862/comments | https://api.github.com/repos/ollama/ollama/issues/1862/events | https://github.com/ollama/ollama/issues/1862 | 2,071,706,623 | I_kwDOJ0Z1Ps57e7v_ | 1,862 | Awq mod support/awq-gguf | {
"login": "sviknesh97",
"id": 100233528,
"node_id": "U_kgDOBflxOA",
"avatar_url": "https://avatars.githubusercontent.com/u/100233528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sviknesh97",
"html_url": "https://github.com/sviknesh97",
"followers_url": "https://api.github.com/users/svi... | [] | closed | false | null | [] | null | 3 | 2024-01-09T06:23:17 | 2024-05-10T01:00:20 | 2024-05-10T01:00:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Does ollama support AWQ formats instead of GGUF? GGUF inference seems to be a little slow, hence I'm thinking about AWQ. If it isn't supported, is there a way to convert AWQ to GGUF? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1862/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2222 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2222/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2222/comments | https://api.github.com/repos/ollama/ollama/issues/2222/events | https://github.com/ollama/ollama/issues/2222 | 2,103,082,752 | I_kwDOJ0Z1Ps59Wn8A | 2,222 | Add LLaVAR model | {
"login": "ryx",
"id": 476417,
"node_id": "MDQ6VXNlcjQ3NjQxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/476417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryx",
"html_url": "https://github.com/ryx",
"followers_url": "https://api.github.com/users/ryx/followers",
"fol... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2024-01-27T00:09:54 | 2024-01-27T00:13:18 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please add the LLaVAR model (from https://llavar.github.io/) to ollama. The model is based on llava and offers enhanced text recognition abilities, which seems like a great addition to the current range of ollama models. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2222/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2222/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8311 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8311/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8311/comments | https://api.github.com/repos/ollama/ollama/issues/8311/events | https://github.com/ollama/ollama/issues/8311 | 2,769,441,773 | I_kwDOJ0Z1Ps6lElPt | 8,311 | OpenSSL/3.0.13: error:0A00010B:SSL routines::wrong version number | {
"login": "dvdgamer08",
"id": 134729003,
"node_id": "U_kgDOCAfNKw",
"avatar_url": "https://avatars.githubusercontent.com/u/134729003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvdgamer08",
"html_url": "https://github.com/dvdgamer08",
"followers_url": "https://api.github.com/users/dvd... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2025-01-05T20:43:11 | 2025-01-05T23:36:35 | 2025-01-05T23:36:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've set up SSL for Ollama following these steps:
### Configuring SSL
1. Obtain an SSL certificate and a private key. You can acquire these from a Certificate Authority (CA) or generate a self-signed certificate (not recommended for production use).
2. Place the certificate and key files ... | {
"login": "dvdgamer08",
"id": 134729003,
"node_id": "U_kgDOCAfNKw",
"avatar_url": "https://avatars.githubusercontent.com/u/134729003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvdgamer08",
"html_url": "https://github.com/dvdgamer08",
"followers_url": "https://api.github.com/users/dvd... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8311/timeline | null | completed | false |
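On #8311: OpenSSL's `wrong version number` almost always means a TLS client hit a plain-HTTP port, and ollama serves plain HTTP without native TLS termination. A quick way to confirm, with TLS left to a reverse proxy:

```
# the default endpoint speaks plain HTTP
curl http://localhost:11434/api/version
# speaking TLS to the same port reproduces the error
curl https://localhost:11434/api/version    # -> SSL routines::wrong version number
# terminate TLS in a reverse proxy (nginx, caddy, etc.) and point clients at it
```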