
Banks Can Slash Production Time by Up to 60% with GenAI, HKMA Report Reveals

2025 marks a turning point for artificial intelligence in finance. After a year of conversations dominated by consumer chatbots, the talk has now shifted to whether industrial-grade AI can withstand regulatory scrutiny, operate within risk frameworks, and survive a compliance audit.

This shift is captured clearly in the HKMA GenAI Sandbox report, developed with Cyberport, which offers possibly one of the most structured and transparent examinations of how banks are pushing to deploy GenAI today.

Rather than asking what a model can generate, it focuses on whether a model can be trusted, validated, governed, and deployed safely across high-stakes workflows.

The HKMA GenAI sandbox experimented with GenAI capabilities across three critical domains: risk management, anti-fraud, and customer experience.

Source: HKMA

Key Considerations in Evaluating GenAI Sandbox Applications

To determine which proposals made the cut in the HKMA GenAI sandbox, four strategic factors guided the prioritisation process.

Source: HKMA

First, evaluators looked for a high level of innovation, specifically targeting solutions that introduced novel ideas or methodologies capable of unlocking new digitalisation opportunities.

Next, this was balanced against the complexity of the solutions, ensuring that proposals demonstrated the technical sophistication and intricacy necessary to drive genuine advancement and add value.

Proposals also needed to show a significant expected contribution to the industry, such as addressing sector-wide challenges through applications that are replicable and scalable.

Finally, the selection process required strict adherence to the principle of fair use. This ensured that the sandbox’s pooled computing resources would be used in a responsible, efficient, and equitable manner.

Since the sandbox operates on shared infrastructure, this particular criterion assesses whether the solution is engineered to be efficient, ensuring it doesn’t monopolise computing power at the expense of other participants.

How Data Strategy and Preparation Affect GenAI Model Development

The performance and reliability of GenAI solutions depend on the data they are trained or fine-tuned on. Testing data are equally critical for rigorously assessing the accuracy and contextual relevance of the model's outputs. Data preparation can often consume as much time and resources as fine-tuning the model itself, if not more.
Source: HKMA

Data Collection

To ensure datasets were inclusive and dependable, HKMA GenAI sandbox participants used a mix of independent public and proprietary sources. These datasets were curated to cover critical dimensions such as product lines, languages, and customer personas.

One collection technique, automated web crawling, captured a website's information completely without human intervention while preserving a highly accurate copy of the site's content.
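The report does not detail the banks' crawling tooling, but the extraction step can be sketched with Python's standard-library HTML parser: pull the visible text out of a fetched page while discarding scripts and styles. Pointing this at approved URLs (and respecting robots.txt) is left out of the sketch.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text from an HTML page, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    """Return the page's visible text as a single whitespace-joined string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```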

Data Pre-processing

Once collected, datasets required pre-processing to improve data quality and protect privacy before use.

To maintain consistency and support efficient fine-tuning, banks carried out several data-cleansing and standardisation steps. A key objective was to remove irrelevant “noise” from text inputs and ensure more uniform model outputs.
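A minimal cleansing pass along these lines might normalise unicode, strip stray markup, and collapse whitespace. This is an illustrative sketch, not the banks' actual pipeline, which the report does not publish.

```python
import re
import unicodedata

def cleanse(text: str) -> str:
    """Normalise unicode, strip stray HTML tags, and collapse whitespace
    so fine-tuning inputs are uniform."""
    text = unicodedata.normalize("NFKC", text)
    text = re.sub(r"<[^>]+>", " ", text)  # drop leftover markup "noise"
    text = re.sub(r"\s+", " ", text)      # collapse runs of whitespace
    return text.strip()
```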

Banks also implemented several techniques to remove or obscure sensitive information. These included data masking, tokenisation (in the data-security sense rather than the NLP sense), synthetic data generation, and pseudonymisation.
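Two of those techniques can be sketched briefly. The account-number format and salt below are hypothetical, chosen only for illustration: masking hides most of an identifier while keeping it recognisable, and pseudonymisation replaces a customer ID with a deterministic token so records stay linkable without exposing the original value.

```python
import hashlib
import re

# Hypothetical account-number pattern for illustration only.
ACCOUNT_RE = re.compile(r"\b\d{3}-\d{6}-\d{3}\b")

def mask_accounts(text: str) -> str:
    """Replace account numbers with a masked form, keeping the last 3 digits."""
    return ACCOUNT_RE.sub(lambda m: "***-******-" + m.group()[-3:], text)

def pseudonymise(customer_id: str, salt: str = "sandbox-salt") -> str:
    """Deterministic pseudonym: the same input always maps to the same token,
    so joins across tables still work without the real identifier."""
    return hashlib.sha256((salt + customer_id).encode()).hexdigest()[:12]
```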

Source: HKMA

Beyond structured data, banks also worked with large amounts of unstructured content. Document pre-processing remains important, even for multimodal LLMs.

Because LLMs have a limited context window, long documents must be split into smaller, logically structured segments; according to the report, this data chunking helped preserve context.
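A common way to preserve context across segment boundaries is to let consecutive chunks overlap, so each window carries some trailing tokens from the previous one. The sketch below assumes a token list and illustrative window sizes; the report does not specify the banks' chunking parameters.

```python
def chunk(tokens, size=512, overlap=64):
    """Split a token list into overlapping windows of `size` tokens,
    where each window repeats the last `overlap` tokens of the previous one."""
    step = size - overlap
    return [tokens[i:i + size]
            for i in range(0, max(len(tokens) - overlap, 1), step)]
```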

Data Augmentation

To reduce the risk of overfitting and minimise hallucinations arising from limited training samples, some banks used data augmentation to expand and enrich their datasets; one such method synthesised new user inputs.
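One simple way to synthesise user inputs is template expansion: combine a handful of phrasings with slot values to multiply a small seed set. The templates and slots below are hypothetical examples, not taken from the report.

```python
import itertools

# Hypothetical query templates and slot values for illustration.
TEMPLATES = [
    "How do I {action} my {product}?",
    "Can you help me {action} a {product}?",
]
SLOTS = {
    "action": ["open", "close", "upgrade"],
    "product": ["savings account", "credit card"],
}

def synthesise_inputs():
    """Expand every template with every slot combination to enlarge
    a small set of user queries."""
    out = []
    for tpl in TEMPLATES:
        for action, product in itertools.product(SLOTS["action"], SLOTS["product"]):
            out.append(tpl.format(action=action, product=product))
    return out
```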

Banks also paid close attention to multilingual balance. Tests confirmed that balancing English, Traditional Chinese, and Simplified Chinese is critical to prevent data skew and ensure terminological accuracy across language variants.
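Checking that balance before fine-tuning amounts to measuring the share of each language in the labelled corpus. A minimal sketch, assuming each sample carries a language tag:

```python
from collections import Counter

def language_balance(samples):
    """Given (text, language_tag) pairs, return each language's share of the
    dataset, so skew can be flagged before fine-tuning."""
    counts = Counter(lang for _, lang in samples)
    total = sum(counts.values())
    return {lang: n / total for lang, n in counts.items()}
```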

Data Quality Check

A thorough data quality check is vital to assess the reliability of the data assets and uncover any underlying issues.

Handling missing data was another important part of the process.

This required identifying which fields were optional or frequently absent, then deciding on the most appropriate approach: whether to impute the missing values, label them as “Unknown,” or remove non-essential fields that were consistently incomplete.
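That decision rule can be sketched as a small function: drop any field that is missing in most rows, and fill remaining gaps with "Unknown". The threshold value is an illustrative assumption, as is imputing with a constant label rather than a statistical estimate.

```python
def handle_missing(rows, drop_threshold=0.8):
    """For each field across a list of dict records: drop the field if it is
    missing in more than `drop_threshold` of rows, otherwise fill gaps
    with the label "Unknown"."""
    fields = {f for row in rows for f in row}
    n = len(rows)
    keep = {f for f in fields
            if sum(1 for r in rows if r.get(f) is None) / n <= drop_threshold}
    return [{f: (r.get(f) if r.get(f) is not None else "Unknown") for f in keep}
            for r in rows]
```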

The final step was comprehensive documentation, which ensured the entire process could be reproduced and provided a clear reference for future updates.

Continuous Data Updates

Source: HKMA

Maintaining model accuracy requires a continuous flow of fresh, relevant data. In the sandbox, this involved implementing automated or semi-automated data pipelines designed to ingest new information from approved sources.

Once collected, this incoming data was processed using the same cleansing and standardisation procedures applied during the initial pre-processing stage, ensuring consistency across the dataset.
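The ingestion step can be sketched as reusing one cleansing function for both the initial load and every refresh, with deduplication against what the corpus already holds. The simplified `cleanse` here stands in for whatever pipeline a bank actually runs.

```python
import re

def cleanse(text: str) -> str:
    """Stand-in for the full pre-processing pipeline: here, just
    whitespace normalisation."""
    return re.sub(r"\s+", " ", text).strip()

def ingest(corpus: set, new_records: list) -> set:
    """Apply the same cleansing used at initial pre-processing, then
    deduplicate incoming records against the existing corpus."""
    for rec in new_records:
        clean = cleanse(rec)
        if clean:
            corpus.add(clean)
    return corpus
```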

With refreshed and expanded datasets in place, periodic model retraining would help prevent performance degradation over time and ensure the model stayed aligned with evolving language patterns, product updates, and current market conditions.

Strong Early Results, With Guardrails Still Essential

The first cohort of the HKMA GenAI Sandbox delivered clear, tangible gains for participating banks.

Participating banks reported production-time reductions of up to 60% in some workflows. Fine-tuned LLMs were able to analyse 100% of case narratives, far surpassing traditional sampling methods. User response was consistently positive as well: 86% of GenAI outputs were rated favourably, and more than 70% of GenAI-generated credit assessments were considered valuable references.

Source: HKMA

At the same time, the sandbox’s report indicated that risks still must be managed. Hallucinations and factual inaccuracies remain key concerns in high-stakes banking environments.

While advanced prompt-engineering techniques helped reduce these errors, banks recognised that these methods mitigate rather than eliminate the issue. For sensitive applications, additional safeguards like RAG and fine-tuning are still required to ensure accuracy and reliability.
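The core of RAG is retrieving vetted documents and constraining the model to answer only from them. Production systems use embedding-based search; the toy sketch below substitutes word-overlap scoring purely to show the shape of the safeguard, and the prompt wording is illustrative.

```python
def retrieve(query: str, documents: list, k: int = 1) -> list:
    """Rank documents by word overlap with the query and return the top-k
    as grounding context. (Real systems use embedding similarity.)"""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list) -> str:
    """Constrain the model to answer from retrieved context only,
    reducing the room for hallucinated facts."""
    context = "\n".join(retrieve(query, documents, k=2))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")
```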

Key priorities for the second cohort include “embedding AI governance within three lines of defence, developing ‘AI vs AI’ frameworks for proactive risk management, and implementing adaptive guardrails and self-correcting systems”.

Featured image by freepik on Freepik

The post Banks Can Slash Production Time by Up to 60% with GenAI, HKMA Report Reveals appeared first on Fintech Hong Kong.
