* Add support for multiple images
* Add support for multiple images in the gui
* Support multiple images in the legacy client and in the api
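A rough sketch of how multiple images might be passed through the Python client; the `images` parameter and its (file, filename) pair format are assumptions inferred from these entries, not a confirmed signature.

```python
from g4f.client import Client

client = Client()

# Hypothetical sketch: send several images alongside one prompt.
# The `images` keyword and the (file, filename) pair format are assumptions.
with open("cat.jpg", "rb") as cat, open("dog.jpg", "rb") as dog:
    response = client.chat.completions.create(
        model="default",  # placeholder model name
        messages=[{"role": "user", "content": "What is shown in these images?"}],
        images=[[cat, "cat.jpg"], [dog, "dog.jpg"]],
    )

print(response.choices[0].message.content)
```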
* Fix some model names in provider model list
* Fix unittests
* Add vision and providers docs
* Improve slim docker image example, clean up OpenaiChat provider
* Enhance event loop management for asynchronous generators
* Fix "attribute 'shutdown_default_executor' not found" error on old Python versions
* Add asyncio module with all async helpers
* Add speech synthesis from Gemini. You can use it without an account
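Not the project's actual helpers, just a sketch of the pattern behind the event-loop entries above: drive an async generator from synchronous code on a private loop, and only call `shutdown_default_executor()` where it exists (it was added in Python 3.9).

```python
import asyncio

def to_sync_generator(async_gen):
    """Consume an async generator from synchronous code on a private event loop."""
    loop = asyncio.new_event_loop()
    try:
        while True:
            try:
                yield loop.run_until_complete(async_gen.__anext__())
            except StopAsyncIteration:
                break
    finally:
        # shutdown_default_executor() only exists on Python >= 3.9,
        # so guard the call on older interpreters.
        if hasattr(loop, "shutdown_default_executor"):
            loop.run_until_complete(loop.shutdown_default_executor())
        loop.close()
```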
* Improve download of generated images, serve images in the api
* Add support for conversation handling in the api
* Add original prompt to image response
* Add download images option in gui, fix loading model list in Airforce
* Support speech synthesis in the Openai generator
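A hedged sketch of the image workflow described above, written against the OpenAI-style client interface; the model name and the attribute carrying the original prompt are placeholders, not confirmed names.

```python
import requests
from g4f.client import Client

client = Client()

# Placeholder model name; the real list depends on the configured providers.
response = client.images.generate(model="flux", prompt="a lighthouse at dusk")

image = response.data[0]
print("served url:", image.url)

# The changelog says the original prompt is attached to the image response;
# the attribute name here is a guess, hence the getattr fallback.
print("original prompt:", getattr(image, "revised_prompt", None))

# Download the generated image locally, mirroring the gui's download option.
with open("lighthouse.png", "wb") as f:
    f.write(requests.get(image.url).content)
```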
* Add AbstractProvider class
* Add ProviderType type
* Add get_last_provider function
* Add version module and VersionUtils
* Display used provider in gui
* Fix error response in api
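A short, hedged example of how these additions might be used together; `get_last_provider` and the version module are named above, but the exact attributes shown are assumptions.

```python
import g4f

response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)

# Ask which provider actually handled the last request (a ProviderType).
provider = g4f.get_last_provider()
print("provider:", getattr(provider, "__name__", provider))

# Version information exposed by the new version module / VersionUtils;
# the attribute path is an assumption.
print("g4f version:", g4f.version.utils.current_version)
```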
* Update backend.py
Use the model received from the user's selection in the web interface model selection.
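A hypothetical sketch of what this change amounts to in the Flask backend: read the model chosen in the web interface from the request payload instead of hard-coding it (handler name and payload keys are illustrative only).

```python
from flask import request, jsonify
import g4f

# Illustrative only: the payload keys and route wiring are assumptions.
def handle_conversation():
    data = request.get_json()
    model = data.get("model") or g4f.models.default
    messages = data.get("messages", [])
    content = g4f.ChatCompletion.create(model=model, messages=messages)
    return jsonify({"content": content})
```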
* Update index.html
Add Llama2 as a provider selection and include model selection for Llama2: llama2-70b, llama2-13b, llama2-7b.
* Update requirements.txt
Add asgiref to enable async views in Flask for the api, fixing:
"RuntimeError: Install Flask with the 'async' extra in order to use async views"