Currently, document creation is limited to a small number of authors, even though there are many session participants and keynote speakers.
By introducing a solution that automatically documents conversations during calls and other sessions, we could, first of all, turn every session participant into a document creator. And if we could easily incorporate external videos as well, we could build a searchable library that archives and documents almost all discussions in the crypto space. I think this could give many people an incentive to visit the website and take part in the discussions.
In other words, by incorporating AI writers, we could create new types of documents in addition to those produced through the current methods, and steer toward making this a forum everyone wants to participate in.
I’m not 100% sure if this aligns with BGIN’s policies, but I believe it could certainly increase the number of participants.
Thank you for your suggestion! That’s one of the challenges we face - I totally agree.
We regularly publish the recordings of our WG meetings, but I assume few people watch them. It would be great if we could do this with AI. One concern I can think of is whether we are allowed to use AI for all the conversations. Sessions under the Chatham House Rule should not be fed directly into an AI. @shinichiro.matsuo
Is it possible to use a local LLM?
Shin’ichiro Matsuo, Ph.D.
Research Professor, Department of Computer Science, Georgetown University; Director, Cyber SMART; and Lead Researcher, Blockchain Eco-System
Thanks for the feedback!
I truly understand your concerns, since cryptography itself is one of the few mechanisms that can stop AI threats. It's not a matter of cost but a matter of raison d'être.
One concern I can think of is whether we are allowed to use AI for all the conversations. Sessions under the Chatham House Rule should not be fed directly into an AI.
I think that published videos can be fed to AIs, and unpublished ones should not be.
Is it possible to use a local LLM?
By "local", do you mean just a local LLM, not a local LLM + local cloud? Is that correct?
So, regarding these concerns, it's a matter of cost, imo.
Using existing AI services just means connecting to their APIs (see the first sketch below).
The local-LLM approach carries the cost of building the LLM and server sides (see the second sketch below).
The local LLM + local cloud approach costs building the LLM and server sides plus the server infrastructure.
Starting with existing AI services and shifting to local is also fine, imo.
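To make the cost comparison concrete, here is a minimal sketch of the first option, assuming an OpenAI-style hosted API. The model names and the recording file name are placeholders, not a worked-out design:

```python
# Minimal sketch of the "existing AI services" path: transcribe a published
# WG-meeting recording and summarize it into a draft document.
# Assumes the OpenAI Python SDK and an API key; model and file names are
# examples only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the recording (hypothetical file name).
with open("wg_meeting_recording.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    )

# 2. Summarize the transcript into a draft document.
summary = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Summarize this meeting transcript into a draft document."},
        {"role": "user", "content": transcript.text},
    ],
)

print(summary.choices[0].message.content)
```

The trade-off is exactly the one raised above: this is the cheapest path to build, but the audio and transcript leave our infrastructure, so it is only suitable for published sessions.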
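And a sketch of the local option, assuming the Hugging Face transformers library running on our own server so that nothing leaves it; the model is just an example:

```python
# Minimal sketch of the "local LLM" path: summarize a transcript entirely
# on our own hardware, so Chatham House Rule sessions never leave the server.
# Assumes the Hugging Face transformers library; the model is an example.
from transformers import pipeline

# The model is downloaded once, then runs locally with no external API calls.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

with open("transcript.txt") as f:
    transcript = f.read()

# BART's input window is limited, so a real pipeline would chunk long
# transcripts; this sketch summarizes a single short excerpt.
summary = summarizer(transcript[:3000], max_length=200, min_length=50)
print(summary[0]["summary_text"])
```

This keeps sensitive sessions in-house, but as noted above it carries the extra cost of building and operating the model and server sides ourselves.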