The seminar "NVIDIA Financial AI Meet-up with Macnica: AI in your own hands. Practical companies discuss the use of financial AI," held on June 24, 2025, featured many industry professionals as speakers and provided an opportunity to share the latest developments in the use of AI in the financial industry.
In this article, we introduce the content of the lecture by Mr. Hirata of Cisco Systems, Inc.
At the end of the article you will find information about the on-demand video, so please read to the end.
Sponsor: Macnica
Sponsor: NVIDIA G.K.
| Lecture | Lecture title | Affiliation / Title | Speaker |
| --- | --- | --- | --- |
| 1 | GTC 2025 Digest (focusing on presentations from financial companies) | Senior Business Development Manager, NVIDIA G.K. | Mr. Hiroshi Hirahata |
| 2 | Strategic AI adoption and business-specific use cases | General Manager, SMBC Global Investment & Consulting Co., Ltd. | Mr. Hirofumi Yamada |
| 3 | Utilizing the financial-specialized model FineNemotron | Senior Solutions Architect, NVIDIA G.K. | Mr. Xianchao Wu |
| 4 | The importance and capabilities of on-premise and local LLMs for secure AI utilization in financial institutions | Representative Director and President, Ippu Senkin Co., Ltd. | Mr. Hideya Suzuki |
| 5 | Easily start building a local LLM with NVIDIA NIM | Macnica | Mr. Hitoshi Onodera |
| 6 | Expanding Use of Generative AI and Corporate Risks: The Need for AI Security | Robust Intelligence Country Manager / Business Development Manager, AI, Cisco Systems, Inc. | Mr. Taiichi Hirata |
| 7 | Kaggle Grandmaster Accelerates Data Science with RAPIDS | KGMoN (NVIDIA Kaggle Grandmaster), NVIDIA G.K. | Mr. Kazuki Onodera |
| 8 | Accelerating Trading Strategy Development Using RAPIDS and CUDA at a Quantitative Hedge Fund | Project Manager / Data Scientist, Okun Co., Ltd. | Mr. Nobutaka Takeuchi |
| 9 | Automatic generation of analyst reports using generative AI: improving accuracy through fine-tuning | Representative Director and President, aiQ Inc. | Mr. Hiroki Yamamoto |
| Special lecture | Working with AI: Using AI to Scale Your Team (discussion paper) | Counselor, Risk Analysis Division, Policy Management Bureau, Financial Services Agency | Ms. Hozue Igarashi |
Expanding Use of Generative AI and Corporate Risks: The Need for AI Security
Mr. Hirata introduced Cisco's efforts regarding AI Defense.
The risks of AI and the importance of security
AI is a non-deterministic technology: the same input does not necessarily produce the same output. Combined with the sheer variety of AI models in use, this has given rise to new risks that differ from those of conventional IT.
AI risks can be broadly divided into "safety (risks related to quality and ethics)" and "security (risks related to information leaks and misuse)."
Companies and organizations need to take measures on both fronts. Mr. Hirata emphasized that, from the perspective of AI governance, measures against safety risks such as misinformation and harmful content have taken precedence in recent years. However, as the use of AI expands and the age of AI agents arrives, measures against security risks will become more important, and companies will be urgently required to address them as well.
AI Defense Functionality Overview
"AI Defense" is a comprehensive security solution designed to support the safe use and development of AI.
・Management of AI application usage:
A function that visualizes and controls connections from within the company to external generative AI services.
This helps prevent the information-leak risks that arise when employees enter internal company information into services such as ChatGPT or DeepSeek.
・AI application development support:
It visualizes AI models distributed across multi-cloud environments such as AWS, GCP, and Azure, and assesses model vulnerabilities using automated red teaming for AI.
During operation, prompt input and output are monitored in real time, providing guardrails that take action in accordance with policy.
These features enable consistent security measures from development to operation.
Risk assessment techniques and governance initiatives
One core technology of AI Defense that is particularly noteworthy is the automation of "red teaming."
This technology automatically generates and evaluates attack scenarios against AI models, and uses methods such as "Tree of Attacks with Pruning" to systematically verify which prompts pose risks to the AI. This makes it possible to detect vulnerabilities that developers had not anticipated and to address them in advance.
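As background (this sketch is not part of the lecture, and all function bodies are illustrative stubs): a Tree of Attacks with Pruning run roughly alternates branching, pruning, and scoring. In a real implementation, `attacker_refine`, `on_topic`, and `judge_score` would each be backed by an LLM (attacker, evaluator, and judge, respectively), and `query_target` would call the model under test.

```python
import random

def attacker_refine(prompt, feedback):
    # Stub: a real attacker LLM rewrites the candidate prompt
    # based on the judge's feedback from the previous round.
    return [f"{prompt} [variant {i}]" for i in range(2)]

def on_topic(prompt, goal):
    # Stub: a real evaluator LLM checks the candidate still pursues the goal.
    return goal.split()[0] in prompt

def judge_score(response):
    # Stub: a real judge LLM rates how close the target's response
    # is to a policy violation (1 = refusal, 10 = full jailbreak).
    return random.randint(1, 10)

def query_target(prompt):
    # Stub: in practice this sends the prompt to the model under test.
    return f"response to: {prompt}"

def tap(goal, width=4, depth=3, success=10):
    """Toy Tree of Attacks with Pruning loop."""
    leaves = [goal]
    for _ in range(depth):
        # 1. Branch: expand each leaf into refined candidate prompts.
        candidates = [c for p in leaves for c in attacker_refine(p, "")]
        # 2. First prune: drop candidates that drifted off-topic.
        candidates = [c for c in candidates if on_topic(c, goal)]
        # 3. Attack and score: query the target, rate each response.
        scored = [(judge_score(query_target(c)), c) for c in candidates]
        for score, prompt in scored:
            if score >= success:
                return prompt  # vulnerability found
        # 4. Second prune: keep only the top-`width` leaves.
        scored.sort(reverse=True)
        leaves = [c for _, c in scored[:width]]
    return None  # no jailbreak found within the search budget
```

The two pruning steps are what keep the tree tractable: off-topic branches are cut before spending queries on the target, and only the highest-scoring leaves are carried into the next round.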
Additionally, the guardrail function with real-time monitoring provided during operation can be integrated with NVIDIA NeMo Guardrails, enabling stronger security tailored to each company's individual needs.
Furthermore, Cisco not only provides AI Defense; its AI security experts also participate in leading international standardization bodies such as OWASP, MITRE, and NIST. In Japan as well, Cisco works with companies, government, and academia through the AI Governance Association to help promote the social implementation of AI governance.
Simply register to watch the video
If you register using the form below, you will receive a URL to watch the on-demand video of the lecture by Mr. Hirata of Cisco Systems introduced in this article. If you missed the seminar, or attended and would like to watch it again, please take this opportunity to register!