Thanks, Ollama.
Ollama is really great. The simplicity of it, the easy-to-use REST API, the fun CLI...
What a fun program.
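Part of why it's so pleasant is that the whole API is basically one JSON POST. A minimal sketch of calling Ollama's /api/generate endpoint from GDScript (Godot 4.x), assuming Ollama is running on its default port 11434; the model name is just an example:

```gdscript
extends Node

func _ready() -> void:
    var http := HTTPRequest.new()
    add_child(http)
    http.request_completed.connect(_on_completed)

    var body := JSON.stringify({
        "model": "qwen2.5-coder:1.5b",  # example model, use whatever you pulled
        "prompt": "Write a GDScript function that adds two Vector2 values.",
        "stream": false  # false = one complete response instead of a token stream
    })
    http.request(
        "http://localhost:11434/api/generate",
        ["Content-Type: application/json"],
        HTTPClient.METHOD_POST,
        body
    )

func _on_completed(_result: int, _code: int, _headers: PackedStringArray, body: PackedByteArray) -> void:
    var data: Dictionary = JSON.parse_string(body.get_string_from_utf8())
    print(data.get("response", ""))
```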
Is there somewhere we can follow for updates?
I will likely post on here when I release the plugin to GitLab and the AssetLib.
But I also don't want to spam this community, so there won't be many, if any, updates until the actual release.
If you want something similar right now, there are Fuku for the chat interaction and selfhosted copilot for code completion on the AssetLib! I can't get the code completion one to work; Fuku works pretty well, but it can't read the user's code at all.
I will upload the files to my GitLab soon though.
EDIT: Updated the GitLab link to actually point to my GitLab page
Just fixed the problem where it inserts too many lines after completing code.
This issue can be seen in the first demo video with the vector example: two newlines were added for no reason. That's fixed now.
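Conceptually it boils down to trimming the completion before inserting it; a sketch of the idea (the helper name is made up, not the plugin's actual code):

```gdscript
# Strip trailing whitespace/newlines from the model's completion before
# inserting it at the caret. strip_edges(false, true) only trims the right side.
func clean_completion(completion: String) -> String:
    return completion.strip_edges(false, true)
```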
I've been looking for a plugin like this on and off for a few weeks now. IMO the chat feature is overrated; I just want the autocomplete.
Currently the completion is triggered via a keyboard shortcut.
Would you prefer it if I made it complete the code automatically? I personally feel that intentionally asking for a completion is more natural than waiting for one to appear.
Are there some other features you would like to see? I am currently working on a function-refactoring UI.
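For the curious, the shortcut hook in an EditorPlugin looks roughly like this; the Ctrl+Period binding and the _request_completion() helper are placeholders, not necessarily what the plugin actually does:

```gdscript
@tool
extends EditorPlugin

func _shortcut_input(event: InputEvent) -> void:
    if event is InputEventKey and event.pressed and not event.echo:
        if event.keycode == KEY_PERIOD and event.ctrl_pressed:
            _request_completion()
            get_viewport().set_input_as_handled()

func _request_completion() -> void:
    # get_base_editor() returns the CodeEdit holding the open script;
    # from here you can grab the text around the caret for the prompt.
    var code_edit: TextEdit = get_editor_interface() \
            .get_script_editor().get_current_editor().get_base_editor()
    print(code_edit.get_line(code_edit.get_caret_line()))
```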
Completion via a keyboard shortcut is perfect.
As far as other features go: I don't want any. I just want code completion via keyboard shortcut.
I think a hard aspect is figuring out what context to feed the LLM. IIRC GitHub Copilot only feeds what's in the current file above the cursor, but I think feeding the whole file plus other open code tabs would be super useful.
You are right that it can be useful to feed in the contents of other related files.
However!
LLMs take a really long time before writing anything when given a large context. The fact that GitHub's Copilot can generate code so quickly even though it has to keep the entire code file in context is a miracle to me.
Including all related or opened GDScript files would be way too much for most models, and it would likely take about 20 seconds before the model actually starts generating code (this delay is also called first token lag). So I will likely only put the current file into the context window, as even that might take some time. Remember, we are running local LLMs here, so not everyone has a blazingly fast GPU or CPU (I use a GTX 1060 6GB, for instance).
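To keep that lag tolerable, the prompt has to be capped somehow. One simple approach, just my sketch of the idea (the 4000-character limit is a guess you'd tune per model):

```gdscript
const MAX_CONTEXT_CHARS := 4000  # rough guess, tune per model and hardware

# Feed only what's above the caret, and only the tail end of it, so
# small local models don't spend ages processing the prompt.
func build_prompt(script_text: String, caret_offset: int) -> String:
    var above := script_text.substr(0, caret_offset)
    if above.length() > MAX_CONTEXT_CHARS:
        above = above.substr(above.length() - MAX_CONTEXT_CHARS)
    return above
```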
Example
I just tried it: it took a good 10 seconds to complete a roughly 111-line script without any other context using this pretty small model, and then about 6 seconds to write about 5 lines of comment documentation (on my CPU). With a very short script it takes about 1 second.
You can try this yourself: use something like HuggingChat to test a big-context-window model like Command R+, fill its context window with a really, really long string (copy-paste it a bunch of times), and see how much longer it takes to respond. For me, it's the difference between one second and 13 seconds!
I am thinking about embedding either the current working file or maybe some other opened files, though, to pull the most important functions out of the script and keep the context length short. This way we can shorten the first token delay a bit.
This is a completely different story with hosted LLMs, as they tend to have blazingly quick first token delays, which makes the wait trivial.
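A cheaper cousin of real embeddings would be to summarize each script down to its function signatures before putting it in the context; a sketch of that idea, not what the plugin ships:

```gdscript
# Reduce a script to its function signatures to use as compact context.
func summarize_script(source: String) -> String:
    var regex := RegEx.new()
    regex.compile("(?m)^(static\\s+)?func\\s+[A-Za-z_]\\w*\\([^)]*\\).*$")
    var signatures := PackedStringArray()
    for m in regex.search_all(source):
        signatures.append(m.get_string())
    return "\n".join(signatures)
```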
Makes sense. Glad anything will exist at all though!
Best offline LLM for Godot? qwen2.5's training data is looking pretty up to date.
I used the 1.5B model of the qwen2.5 family for code generation in the example. It works fine, but sometimes it forgets that it's writing code, exits the markdown code block, and starts writing an explanation...
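A crude guard against that rambling is to keep only the first fenced block from the response; a sketch:

```gdscript
const FENCE := "\u0060\u0060\u0060"  # three backticks

# Keep only the contents of the first fenced code block in the response;
# anything the model rambles after the closing fence gets dropped.
func extract_code(response: String) -> String:
    var start := response.find(FENCE)
    if start == -1:
        return response  # no fence at all, keep everything
    start = response.find("\n", start)  # skip the opening fence line
    if start == -1:
        return response
    var end := response.find(FENCE, start)
    if end == -1:
        return response.substr(start + 1)
    return response.substr(start + 1, end - start - 1)
```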