
0.1.0 #8

Merged 22 commits into master on Mar 5, 2025

Conversation

@github-actions github-actions bot commented Mar 4, 2025

Changelog

0.0.2a10 (2025-03-03)

Full Changelog

Merged pull requests:

0.0.2a9 (2025-01-17)

Full Changelog

Merged pull requests:

  • Update Docker configuration and async consumer init #6 (NeonDaniel)

0.0.2a8 (2025-01-17)

Full Changelog

Merged pull requests:

0.0.2a7 (2024-04-04)

Full Changelog

Merged pull requests:

0.0.2a6 (2024-03-25)

Full Changelog

Merged pull requests:

0.0.2a5 (2024-01-17)

Full Changelog

Merged pull requests:

0.0.2a3 (2023-11-16)

Full Changelog

0.0.2a2 (2023-10-23)

Full Changelog

0.0.2a1 (2023-10-19)

Full Changelog

0.0.1 (2023-06-30)

Full Changelog

0.0.1a4 (2023-06-30)

Full Changelog

0.0.1a3 (2023-06-06)

Full Changelog

0.0.1a2 (2023-05-16)

Full Changelog

* This Changelog was automatically generated by github_changelog_generator

NeonBohdan and others added 22 commits October 19, 2023 19:31
* Install llm-core

* Use core in RMQ

* Use config.py as core duplicate

* Use core in model

* Updated diana config

* Updated readme

* Fix embeddings output type

* Fix model variable duplicate

* Added num_parallel_processes parameter

* Fix Dict type def

* Init all abstract methods

* Fix _score call

* chatgpt -> chat_gpt

* Use pass instead of None
* Updated requirements

* Added persona to llm

* Removed num_parallel_processes from LLM class
* Remove json config as deprecated

* Updated install files

* Updated readme

* Updated requirements

* Updated main and rmq

* Updated model file name

* Updated folder name

* Updated model

* Added API key init

* Fixed private var use

* Use latest version of models

* context_depth only even
* Updated install files

* Updated readme

* Updated main and rmq

* Updated model file name

* Updated folder name

* Updated model

* Updated requirements

* Use gemini

* Added model field

* Added system_prompt wrapper

* Update to support Submind participation (#2)

* Resolve import error
Update dependencies for chatbotsforum compatibility
Update license tests

* Use stable embeddings

---------

Co-authored-by: Daniel McKnight <daniel@neon.ai>
Co-authored-by: NeonBohdan <bohdan@neon.ai>

* Fix instruction loss of empty history

---------

Co-authored-by: Daniel McKnight <34697904+NeonDaniel@users.noreply.github.com>
Co-authored-by: Daniel McKnight <daniel@neon.ai>
* Add Docker manual release action
Update license notices to 2025

* Whitelist MIT-licensed `attrs` package that is failing license tests
…e logging config (#6)

Initialize log and prevent joining thread for async consumer support
Update Dockerfile to set default config path
* Update neon-llm-core dependency to stable version

* Whitelist `RapidFuzz` package in license tests as it carries an MIT license https://pypi.org/project/RapidFuzz/
@NeonDaniel NeonDaniel closed this Mar 5, 2025
@NeonDaniel NeonDaniel reopened this Mar 5, 2025
@NeonDaniel NeonDaniel merged commit 4b6cd71 into master Mar 5, 2025
4 checks passed