Artificial Intelligence
We have a dedicated AI channel where we curate our langflow, langchain, and ollama packages, along with a number of other frameworks and vendor client SDKs.
We have also packaged MCP servers for the machine images we ship in cloud marketplaces, and the next release of each of these will be AI-aware.
We are working with our cloud vendors on what our AI appliance(s) will look like.
We are still trying to determine whether a GPU-heavy machine dedicated to running LLMs is something we can differentiate ourselves with. As a distro vendor, it is trivial for us to set up the drivers, accelerators, and AI stack(s) that take advantage of this at the device level; but unless the workload profile requires that capacity most of the time, it is still probably more cost-effective to offload to a dedicated LLM provider.
The CPU-only alternative is also appealing because it reflects the reality of most users and developers: a desktop or laptop with insufficient resources for local AI, plus configurations to subscribe to the various vendor offerings.
We are looking toward something like Llama Stack, where both we and our clients can publish executors/agent scripts that perform specific AI tasks, coordinating against the marketplace offerings in your account and the AI vendor mix of your choice. We can create and manage scripts that coordinate with our resources, and you can do the same against all of your other resources.
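To make the executor/agent idea concrete, here is a minimal sketch of what such a script could look like: named tasks registered against a pluggable provider, so the same task can run against a local model or a hosted vendor depending on the account's configuration. All names here (`Provider`, `ExecutorRegistry`, `summarize`) are illustrative assumptions for this sketch, not a real Llama Stack API.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Provider:
    """One entry in the account's AI vendor mix (hypothetical)."""
    name: str
    endpoint: str

    def complete(self, prompt: str) -> str:
        # Stub: a real provider would call its API here, whether a
        # local runtime or a hosted LLM vendor.
        return f"[{self.name}] {prompt}"


class ExecutorRegistry:
    """Maps task names to executor functions (hypothetical)."""

    def __init__(self) -> None:
        self._tasks: Dict[str, Callable[[Provider, str], str]] = {}

    def register(self, name: str):
        def wrap(fn: Callable[[Provider, str], str]):
            self._tasks[name] = fn
            return fn
        return wrap

    def run(self, name: str, provider: Provider, payload: str) -> str:
        return self._tasks[name](provider, payload)


registry = ExecutorRegistry()


@registry.register("summarize")
def summarize(provider: Provider, text: str) -> str:
    # A specific AI task; the provider it runs against is chosen
    # at call time, not hard-coded into the script.
    return provider.complete(f"Summarize: {text}")


# Point the same task at a locally hosted model...
local = Provider("ollama-local", "http://localhost:11434")
print(registry.run("summarize", local, "release notes"))
```

The point of the shape is that the task catalogue (our scripts plus yours) is decoupled from the provider mix, so swapping a local GPU box for a hosted vendor is a configuration change, not a code change.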