Tether launches QVAC AI framework for smartphones and GPUs, enabling local model training, faster performance, lower costs, and better privacy without cloud servers.

Tether Introduces AI Framework For Consumer GPUs and Smartphones


Tether released a new artificial intelligence framework called QVAC Fabric LLM on March 17, 2026. The system lets users train AI models on ordinary devices such as smartphones, laptops, and home computers. The company said the technology removes the need for expensive cloud servers and specialized hardware, putting the ability to build and use AI tools within reach of far more people.

New AI Framework Runs on Phones, Laptops, and Home GPUs

Tether designed the QVAC Fabric LLM framework for everyday devices. In the past, large AI models had to be trained on expensive Nvidia servers or cloud systems, so only large companies could build advanced AI. The new framework makes it possible to run models on consumer hardware instead.

The system supports a wide range of chips and graphics cards, including Intel, AMD, Apple Silicon, and mobile GPUs. Users therefore do not need special equipment to train or test AI models, which makes development easier for students, developers, and small teams.


The framework is built on a technology known as BitNet, which reduces memory consumption. Because of this design, models need less power and storage. In tests, memory use fell by more than 70 percent, allowing larger AI models to fit on smaller devices.
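To give a sense of how such savings arise, here is a rough back-of-the-envelope sketch. BitNet-style models store weights in roughly 1.58 bits each (three possible values) instead of the 16 bits used by FP16 models; the figures below are illustrative arithmetic, not QVAC's published measurements.

```python
# Illustrative arithmetic only: rough memory footprint of model weights
# at different precisions. Not a measurement of the QVAC framework.

def weight_memory_gb(num_params: float, bits_per_weight: float) -> float:
    """Memory needed for the weights alone, in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 1e9  # a 1-billion-parameter model
fp16 = weight_memory_gb(params, 16)       # 2.00 GB at 16 bits per weight
ternary = weight_memory_gb(params, 1.58)  # about 0.20 GB at ~1.58 bits

reduction = 1 - ternary / fp16
print(f"FP16: {fp16:.2f} GB, ternary: {ternary:.2f} GB, "
      f"reduction: {reduction:.0%}")  # reduction is about 90%
```

Even with overhead for activations and runtime state, this is the basic reason a memory drop of more than 70 percent is plausible for weight storage.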

Tether also demonstrated that AI models run faster on mobile GPUs. In many tests, the GPU was two to eleven times faster than the CPU. This means phones and laptops can now handle tasks that once required data centers.

The company said a 1-billion-parameter model can be trained in about an hour. The test was performed on phones such as the Samsung Galaxy S25 and iPhone 16. Even larger models were tested on these devices; in one instance, a 13-billion-parameter model ran on a smartphone.

Local AI Training Improves Privacy and Reduces Costs

One standout feature of the framework is local processing. All training and testing can take place on the device itself, so users never need to send their data to cloud servers. This protects private information and reduces security risks.

The system also works without an internet connection, so people can use AI tools offline when needed. This can be helpful in research, education, and medical work. Many users prefer local systems because they keep data safe.

Tether said the framework also supports LoRA fine-tuning. This method lets users customize AI models quickly, training them for specialized tasks without building a new system from scratch. As a result, projects can be completed faster.
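The core idea behind LoRA can be sketched in a few lines: the large pretrained weight matrix stays frozen, and training only touches two small low-rank matrices whose product is added to it. The dimensions and names below are hypothetical, and this is a generic illustration of the technique, not Tether's API.

```python
import numpy as np

d, r = 512, 8                       # model width and LoRA rank (assumed values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))     # pretrained weight matrix, kept frozen
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # trainable, zero-initialized so the
                                    # adapted layer starts identical to the base

def forward(x):
    # Output of the adapted layer: base weights plus low-rank update B @ A.
    return x @ W.T + x @ (B @ A).T

# Only A and B are trained: 2 * d * r = 8,192 parameters,
# versus d * d = 262,144 for fully fine-tuning this layer.
print(A.size + B.size, W.size)  # prints: 8192 262144
```

Training a few thousand parameters instead of hundreds of thousands per layer is what makes fine-tuning feasible on consumer hardware.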

The framework also supports many popular AI models, including Llama 3, Qwen 3, and Gemma 3. In addition, the company has published developer tools on public platforms, making it easier to start new AI projects.

The project is open source under the Apache 2.0 license, so anyone can use and modify the software. Developers around the world can contribute improvements, which means the technology may advance quickly.

Tether Wants AI to Become Decentralized and Open

Tether said the goal of the project is to make AI accessible to everyone. CEO Paolo Ardoino explained that advanced AI should not remain in the hands of only a few big companies. He said too much reliance on cloud providers can slow innovation, which is why the company wants AI to run locally on personal devices.

The new framework could also support federated learning in the future. In this method, many devices train a shared model together: each device keeps its own data private but shares model updates. This lets large AI systems grow without central servers.
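The federated averaging pattern described above can be sketched as a toy example. Three simulated "devices" each run a gradient step on their own private data, and only the updated weights are averaged by a coordinator. All names and numbers here are hypothetical, not part of the QVAC framework.

```python
import numpy as np

def local_step(w, X, y, lr=0.05):
    """One gradient step of linear regression on a device's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three devices, each holding its own private dataset that never leaves it.
datasets = []
for _ in range(3):
    X = rng.standard_normal((50, 2))
    datasets.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(100):
    # Each device trains locally; only the updated weights are shared.
    local_ws = [local_step(w, X, y) for X, y in datasets]
    w = np.mean(local_ws, axis=0)  # the coordinator averages the updates

print(w)  # converges toward true_w = [2, -1]
```

The raw data stays on each device throughout; only small weight vectors cross the network, which is the privacy property the article describes.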

Tether plans to invest more in this technology in the coming years. The company believes AI will become part of everyday life, and that faster, cheaper tools can benefit education, science, and business.
