NAME Ollama server

DESCRIPTION

Ollama is a platform designed to manage and run Large Language Models (LLMs) locally on devices. It offers a simple way to download, run, and use these artificial intelligence models without a cloud connection.

FIELDS OF APPLICATION

Deployment and Application

Development

Infrastructure and Computing

MOST OUTSTANDING EQUIPMENT AND COMPONENTS

  • GPU server

    System with 16 GPUs (11–48 GB VRAM each)

SERVICES OFFERED BY THE ASSET

Ollama execution server

Ollama is a platform designed to manage and run Large Language Models (LLMs) locally on devices. It provides a simple way to download, run, and use these artificial intelligence models without the need for cloud connectivity. Ollama supplies an infrastructure for running advanced models directly on local machines, optimizing resources and reducing latency by removing the dependence on external servers.

MGEP makes available to companies a server equipped with high-performance graphics cards that hosts Ollama, ideal for running advanced models or LLMs. This resource can be used to train and deploy language models, perform complex data analysis, and build machine learning applications, offering companies an optimized infrastructure for AI projects without depending on the cloud.
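A server running Ollama exposes an HTTP API that clients can call over the local network. As a minimal sketch of how a company could query the hosted service (assuming Ollama's default listen port 11434 and a model name such as "llama3" that has already been pulled on the server — both are illustrative, not details from this listing), a Python client could look like this:

```python
import json
import urllib.request

# Ollama's default listen address; replace the host with the MGEP
# server's address when calling the shared infrastructure.
OLLAMA_URL = "http://localhost:11434"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    'stream': False asks the server to return one complete JSON object
    instead of a stream of partial responses.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a completion request to an Ollama server and return the text."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # "llama3" is a placeholder: use any model already available on the server.
    print(generate("llama3", "Summarize what an LLM is in one sentence."))
```

Because the model runs entirely on the local server, no prompt data leaves the organization's network, which is the main advantage of this setup over cloud-hosted APIs.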

ENTITY MANAGING THE ASSET

Mondragón Goi Eskola Politeknikoa JMA SCoop
Contact person:
Aitor Aguirre Ortuzar
aaguirre@mondragon.edu