
Applications of a Vietnamese AI model are attracting millions of visits

A Large Language Model (LLM) developed by Vietnamese engineers powers a series of useful applications serving users' information-search and communication needs, recording millions of visits.

ZNews, 30/06/2025


Developed by Zalo's Vietnamese engineering team, the large language model with 13 billion parameters powers a series of practical applications that attract large numbers of users every month. Notably, the Kiki Info Q&A assistant operates as an Official Account (OA) on Zalo and offers three groups of features: question answering on topics such as science, history, and traffic laws; content creation such as writing, composing emails, and drafting social network posts; and entertainment, with suggestions for tourist destinations, music, and books.

Statistics from the development team show that the assistant's OA account on Zalo reached 1 million users in just under 2 months.


Kiki Info integrated on the Zalo messaging platform.

Another application built on Zalo's large language model is AI greeting cards, which have now reached 15 million cards created and sent by users. The figure, compiled over a 2-month period, reflects users sending AI cards with well-wishes to relatives and friends on important holidays.

Users rate the applications highly for their smart experience, which shortens the time spent searching for information and making everyday decisions, while also making it more interesting and engaging to connect with others.

Zalo's LLM is developed with a training-from-scratch approach: the team carries out every stage itself, from parameter initialization and choice of model architecture to running the training algorithms on its own dataset, allowing Vietnamese engineers to fully master and control both the training process and the resulting model.
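For illustration only, the sketch below shows what "training from scratch" means in practice: the weights are randomly initialized rather than loaded from an existing checkpoint, and the team controls the architecture, data, and training loop end to end. It is a toy PyTorch example with placeholder sizes and random token ids, not Zalo's actual model or code.

```python
# Toy sketch of training a causal language model from scratch (illustrative only).
import torch
import torch.nn as nn


class TinyCausalLM(nn.Module):
    """A deliberately small decoder-style language model with random initial weights."""

    def __init__(self, vocab_size=32000, d_model=256, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        seq_len = ids.size(1)
        pos = torch.arange(seq_len, device=ids.device)
        x = self.tok_emb(ids) + self.pos_emb(pos)
        # Causal mask so each token only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(ids.device)
        x = self.blocks(x, mask=mask)
        return self.lm_head(x)


model = TinyCausalLM()                 # weights start from random initialization, not a checkpoint
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

# Placeholder corpus: random token ids standing in for tokenized Vietnamese text.
batch = torch.randint(0, 32000, (8, 64))

for step in range(10):
    logits = model(batch[:, :-1])                        # predict the next token at each position
    loss = loss_fn(logits.reshape(-1, logits.size(-1)),  # standard next-token cross-entropy loss
                   batch[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```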

At the end of 2024, Zalo's LLM finished in the Top 2 on VMLU (Vietnamese Multitask Language Understanding Benchmark Suite for Large Language Models), a platform for assessing and ranking Vietnamese-language proficiency. Specifically, Zalo's large language model rose to the No. 2 position in the ranking of models trained from scratch, just behind Meta's Llama-3-70B, officially surpassing big names such as GPT-4 (OpenAI), gemma-2-9b-it (Google), and microsoft/Phi-3-small-128k-instruct (Microsoft).


2024 Ranking of LLMs Built from Scratch with Zalo's LLM Model in Top 2.

This is a great success for a large language model developed by Vietnamese engineers, especially given the many limitations it faced in the early days of development. While large companies around the world owned thousands of Nvidia's latest GPUs, engineers in Vietnam at the time were not yet equipped with the necessary server infrastructure.

At the same time, Vietnamese ranks among the languages whose data resources are dozens of times smaller than those of English or Chinese. Vietnam also has limited human resources and LLM-training experience compared with developed countries.

Zalo adopted a development strategy to overcome these limitations of its training environment. By equipping its computing infrastructure with 8 DGX H100 servers, the team developed the LLM directly on Nvidia's newest and scarcest GPU line at the time, with performance of up to 256 petaFLOPS (one petaFLOPS is 10^15, or one million billion, floating-point operations per second).
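As a rough sanity check, the 256 petaFLOPS figure is consistent with Nvidia's quoted FP8 AI performance of about 32 petaFLOPS per DGX H100 system; the precision being counted is an assumption here, not something the article states.

```latex
8 \ \text{DGX H100 servers} \times 32 \ \text{petaFLOPS/server} \approx 256 \ \text{petaFLOPS},
\qquad 1 \ \text{petaFLOPS} = 10^{15} \ \text{FLOP/s}.
```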


Zalo's server system has superior processing capacity.

At the same time, the team invested in high-quality training data to compensate for the scarcity of Vietnamese data sources. Through a series of studies run on small consumer GPUs, Zalo's engineers also built up the knowledge and capability to train LLMs, so that a ready platform was in place as soon as they owned large computing infrastructure.

This development direction helped Zalo successfully build a large language model with 7 billion parameters focused on Vietnamese after only 6 months of training in 2023, scoring 150% of OpenAI's GPT-3.5 result on the VMLU benchmark. It has since surpassed a series of global names in the VMLU 2024 rankings while also moving the research model into practical deployment for the community.


At its first launch in 2023, Zalo's LLM was tested against a series of global models such as ChatGPT 3.5, ChatGPT 4.0, Llama, and PhoGPT, as well as a real human player.

According to Zalo, the LLM will continue to receive investment in training to bring more world-class AI applications to users, thereby mastering advanced AI technology and contributing to the country's era of technological development, with breakthroughs oriented toward science, technology, innovation, and national digital transformation.

Source: https://znews.vn/ung-dung-cua-mo-hinh-ai-viet-dang-thu-hut-hang-trieu-luot-truy-cap-post1563330.html

