Ollama is a platform designed to manage and run Large Language Models (LLMs) locally on a device. It provides a simple way to download, run, and use these artificial intelligence models without needing a cloud connection.
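As a minimal sketch of that workflow (assuming Ollama is already installed on the machine; the model name `llama3` is only an example):

```shell
# Download a model from the Ollama library (model name is an example)
ollama pull llama3

# Run the model interactively from the terminal
ollama run llama3 "Summarize what a large language model is."

# Alternatively, query the local REST API that Ollama serves
# (port 11434 is the default)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello",
  "stream": false
}'
```

Everything runs on the local machine; no data leaves the device.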
16-GPU system (11–48 GB)
Ollama execution server
Mondragón Goi Eskola Politeknikoa JMA SCoop
Contact person: Aitor Aguirre Ortuzar
Let us get to know you better. If you want to implement intelligent technologies and advanced materials that improve the efficiency of your company's production system and deliver solutions with greater added value, fill in this form.