SelenaCore Update: Automated Local Engine Orchestration and Enhanced UI Feedback

I am excited to announce a significant overhaul of how SelenaCore manages local AI infrastructure. A major focus of this update is automated lifecycle management of local servers: with commit f16af95, the system now automatically starts and stops local engines when switching providers. This is complemented by non-blocking switching logic (fd2d769), which keeps the UI responsive while backend services initialize.
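To illustrate the idea behind non-blocking switching, here is a minimal sketch. The class and method names (`EngineManager`, `switch`) are hypothetical, and engine start/stop are simulated with short sleeps; in a real system these would spawn and terminate the local server processes.

```python
import asyncio

class EngineManager:
    """Illustrative sketch of non-blocking provider switching
    (names and timings are assumptions, not SelenaCore's API)."""

    def __init__(self):
        self.active = None
        self._switch_task = None

    async def _start_engine(self, name: str):
        await asyncio.sleep(0.01)  # stand-in for process spawn + warm-up
        self.active = name

    async def _stop_engine(self):
        if self.active:
            await asyncio.sleep(0.01)  # stand-in for graceful shutdown
            self.active = None

    def switch(self, name: str):
        # Schedule the switch as a background task so the caller
        # (e.g. the UI event loop) returns immediately.
        async def _do_switch():
            await self._stop_engine()
            await self._start_engine(name)
        self._switch_task = asyncio.ensure_future(_do_switch())
        return self._switch_task

async def main():
    mgr = EngineManager()
    task = mgr.switch("ollama")  # returns at once; UI stays responsive
    assert mgr.active is None    # switch still in flight
    await task
    assert mgr.active == "ollama"

asyncio.run(main())
```

The key design point is that `switch` only schedules the work; awaiting the returned task (or attaching a done-callback) is how the UI learns the engine is ready.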
On the containerization front, we've refactored the Docker setup to support native Ollama and Piper integration while cleaning up legacy GPU configurations (9c37a09). For better visibility, we've introduced progress bars for all engine installations and model downloads (72f4caa), including asynchronous model pulling for Ollama (9dd23bc).
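As a sketch of how pull progress can be surfaced: Ollama's streaming `/api/pull` endpoint emits one JSON object per line with `status`, `completed`, and `total` fields, which a progress bar can consume directly. The helper below is illustrative, not SelenaCore's actual code; the commented driver at the bottom assumes a local Ollama server and uses `llama3` purely as an example model name.

```python
import json

def render_progress(line: bytes) -> str:
    """Turn one JSON line from Ollama's streaming /api/pull response
    into a short progress string. The field names ('status',
    'completed', 'total') follow Ollama's pull API."""
    event = json.loads(line)
    status = event.get("status", "")
    total = event.get("total")
    done = event.get("completed", 0)
    if total:
        pct = 100 * done / total
        return f"{status}: {pct:.0f}%"
    return status

# Driving it from the live endpoint (requires a running Ollama
# server; the model name is illustrative):
#
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/pull",
#       data=json.dumps({"name": "llama3"}).encode(),
#   )
#   with urllib.request.urlopen(req) as resp:
#       for line in resp:
#           print(render_progress(line))
```

Because the server streams events as they happen, the same loop works for both quick layer checks and multi-gigabyte downloads without blocking on the full transfer.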
Technical refinements include moving Llama.cpp to the /v1/completions endpoint (4b33915) and improving health checks via the models endpoint (f87e6a0). We have also rewritten the system prompts to improve local model compliance and added editable TTS rules (dc7cb47).
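Both refinements fit the OpenAI-compatible surface that llama.cpp's server exposes: `/v1/models` for liveness probing and `/v1/completions` for inference. A minimal sketch, assuming only that the server speaks this OpenAI-style API (the function names and the `max_tokens` choice are illustrative):

```python
import json
import urllib.error
import urllib.request

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Probe an OpenAI-compatible server via its models endpoint.
    Healthy only if /v1/models answers 200 with a list payload."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models",
                                    timeout=timeout) as resp:
            return resp.status == 200 and "data" in json.load(resp)
    except (urllib.error.URLError, json.JSONDecodeError, OSError):
        return False

def complete(base_url: str, prompt: str, timeout: float = 30.0) -> str:
    """Send a plain completion request to /v1/completions."""
    payload = json.dumps({"prompt": prompt, "max_tokens": 64}).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        # OpenAI-style response: text lives under choices[0].text
        return json.load(resp)["choices"][0]["text"]
```

Probing the models endpoint rather than a bare TCP connect means the check only passes once the server has actually loaded and registered a model, not merely opened its port.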