Private AMA: Tortuga Telegram
Tortuga is a private Telegram and exclusive investor group, home to whales, Web3 marketers, project owners, and other industry figures.
Disclaimer: The summary and transcript below were generated automatically from the full AMA audio. They may contain unintentional inaccuracies. Please refer to the original recording for verbatim quotes or affirmation of details. Notify us of any factual errors so we can correct the transcript. We make no guarantees about completeness or total accuracy.
Summary
I. Introduction
Ganyo begins the AMA by welcoming everyone and introducing AlphaKEK, describing them as an interesting AI venture focused on building unbiased AI infrastructure tools and applications for web3. He states the speakers for today will be Vinny and Vladimir, two members of the AlphaKEK team.
II. Team Overview
Ganyo passes the discussion off to Vinny and Vladimir to introduce themselves and provide an overview of AlphaKEK and their backgrounds.
Vinny introduces himself as the marketing and business development manager at AlphaKEK. He states this is their first AMA and thanks Ganyo for hosting and the listening group for participating. He describes the group as well-curated and private, which he appreciates.
Vinny provides a high-level introduction to AlphaKEK, describing them as an AI lab that has built a custom AI engine specifically for crypto data. He notes their ticker symbol is $AIKEK on Ethereum. The team consists of founder Vladimir, who is also on the call, plus 4 other core contributors including himself.
Vinny explains AlphaKEK is the culmination of Vladimir's life work and experience as a data scientist and AI engineer. Vladimir has been published by leading AI companies OpenAI and Nvidia, demonstrating his expertise.
Over the past 6-8 months during the crypto bear market, the team has been fully focused on building and completing their AI engine along with associated products like a web app, Telegram bot, and proofs of concept with partners.
Their goal is to help users gain insights from crypto markets faster and more conveniently than existing solutions. They are getting close to having strong alpha signals by continuing to integrate more data into their engine, especially on-chain data.
On the consumer side, AlphaKEK offers products including a simple chatbot accessible via their web app and Telegram. They also have a research assistant tool users can leverage for generating content, hunting for alpha opportunities, education, and community engagement.
For the B2B side, they target crypto projects, blockchains, and ecosystems to provide customized chatbots, analytics sites, Telegram and Discord integrations, and web apps.
Vinny emphasizes AlphaKEK has taken the long road of building their own unbiased and uncensored AI model along with the infrastructure required. Now they are rolling out the novel products their community has been utilizing and benefiting from.
Ganyo follows up by asking Vinny to provide more details on his own background, noting Vinny's experience spans traditional finance and crypto.
Vinny shares he has a strong mix of experience across traditional fintech, web3, and now AI - over 17 years in finance. His background began in banking and wealth management.
As an entertaining side note, Vinny reveals he used to work for 10 years with economist Peter Schiff, well-known as one of the biggest Bitcoin bears today. However, Vinny explains the early libertarian gold bugs like Schiff actually share a similar ethos as the cypherpunks and anarcho-capitalists who formed the early Bitcoin community. Both groups distrusted central banks and fiat currency, wanting alternative sound money outside government control.
After working in the highly regulated world of banking, DeFi was a hugely refreshing breakthrough for Vinny when it emerged in 2020. He saw DeFi solutions like Uniswap and decentralized lending as using code and algorithms to solve financial problems, removing middlemen and complexity.
Vinny became a partner at a DeFi aggregator platform offering services like swapping, borrowing, and earning yields through simple transactions built on smart contracts. In 2020-2021 he helped integrate the platform with around 13 different blockchain protocols, getting grants from Polygon, Moonbeam, NEAR, and others along the way.
He also built an early AI product on the platform - a token recommendation engine that scanned a user's wallet, identified similar profiles, and suggested additional tokens to buy.
In 2022 Vinny did a brief sidestep into NFTs, launching some collections on Solana. He also created an AI music generator NFT product that creates customizable songs from the artwork.
When crypto bear market conditions returned, Vinny went back to focus on traditional finance work. He currently works in Singapore overseeing the launch of one of the world's largest vaults for alternative assets like precious metals, luxury watches, and art. These assets are tokenized on blockchain so they can be easily traded peer-to-peer with transactions tracked in real-time.
About 9 months ago, Vinny discovered AlphaKEK on the 4chan /biz/ forum, which focuses on business and finance. He joined the early community and became a large token holder. Vladimir eventually offered him a marketing role at AlphaKEK, and the rest is history.
Ganyo responds positively that this background gives great context on Vinny's relevant expertise across various markets. He notes Vinny's experience bridges the still tiny 6% global adoption of Bitcoin with the much larger traditional finance world. Ganyo agrees this will be very valuable for Vinny's current role at AlphaKEK.
Vinny adds some additional thoughts:
- In his work tokenizing alternative assets, he struggled to find AI platforms that could generate quality real-time alerts and content related to them. Most platforms had weak generic output like "diversify your holdings" or "talk to an expert".
- He wanted to find an uncensored AI model coupled with uncensored data, which the 4chan /biz/ forum and AlphaKEK provided.
- AlphaKEK integrates both alt finance data as well as mainstream sources like Bloomberg and Cointelegraph.
- They are also now integrating on-chain data which requires additional technology they are calling Alpha AGI. This handles the complex data transformation process.
- Many current AI crypto projects are hype-driven, quickly launching basic alert and sniping bots. But AlphaKEK has taken the harder, longer road of building real infrastructure and now has quality output to productize.
- For example, they can now template the AI's output for things like alerts, sentiment analysis, etc. They can replicate and improve the popular sniping and alert products by adding both quantitative data like price/volume as well as qualitative data like sentiment analysis and bullishness scores based on the AI's assessment.
- He welcomes any suggestions from Ganyo's community on specific methodologies for alerts, trading signals, bots, etc that they'd like to see productized. Since AlphaKEK has built the infrastructure and engine, creating the Telegram bots and interfaces is easy.
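The templated-output idea described above, combining quantitative stats like price and volume with qualitative, model-derived scores, might look roughly like the sketch below. All field names, thresholds, and the scoring scale are assumptions for illustration, not AlphaKEK's actual schema.

```python
# Hypothetical alert template combining quantitative market data with
# qualitative, AI-derived scores. Fields and thresholds are illustrative
# only; they are not AlphaKEK's actual schema.

def format_alert(token, price_change_pct, volume_usd, sentiment, bullishness):
    """Render a templated alert string from quant + qual inputs.

    sentiment and bullishness are assumed to come from a model's
    assessment, normalized to the 0..1 range."""
    signal = "WATCH"
    if price_change_pct > 10 and bullishness > 0.7:
        signal = "HOT"
    elif price_change_pct < -10 and bullishness < 0.3:
        signal = "AVOID"
    return (f"[{signal}] {token}: {price_change_pct:+.1f}% / "
            f"${volume_usd:,.0f} vol / sentiment {sentiment:.2f} / "
            f"bullishness {bullishness:.2f}")

print(format_alert("AIKEK", 12.5, 340_000, 0.81, 0.77))
```

The point of the template is that the qualitative columns ride along with the familiar quantitative ones, so the same renderer can back an alert bot, a dashboard row, or a report line.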
III. AlphaKEK Products and Offerings
Ganyo notes AlphaKEK has said they are an AI lab offering multiple products, not just a single solution. He asks Vinny to summarize how many AI products they currently offer and provide an overview of each one.
Vinny explains:
- On the B2C side, they have a chatbot and a research assistant tool.
- For B2B, they offer custom AI chatbots tailored to each business customer. This is a proven go-to-market strategy in AI exemplified by companies like PAAL AI.
- AlphaKEK creates white labeled custom chatbots for any crypto projects, blockchains, ecosystems, and brands wanting to better engage with their communities. Use cases include improving user onboarding, education, research, and fun community engagement.
- They also have several current clients they can discuss later.
Vinny reiterates their end goal is to provide users with faster and more convenient access to unbiased insights from crypto markets compared to existing solutions.
He notes there are many AI projects in crypto specifically racing to offer some form of predictive alpha signals and trading bots. As AlphaKEK continues integrating more data sources, especially on-chain data, their alpha insights are improving.
In Vinny's view, the main problem AlphaKEK is solving is that most existing AI tools, especially in web3, fail to provide sufficiently high quality output for crypto research and finance use cases.
Many of these tools rely on centralized APIs like ChatGPT, which cannot provide any crypto-related insights or alpha. So AlphaKEK spent significant time building their own proprietary unbiased and uncensored AI infrastructure and model tuned specifically for financial data.
Ganyo follows up asking Vladimir to explain the core AlphaKEK product called Fractal, which their whitepaper describes as a "data engine". Vladimir summarizes:
- At a high level, Fractal does two key things - data ingestion and data retrieval.
- For ingestion, it takes in various types of data relevant to crypto and finance, both on-chain data from web3 as well as relevant web2 data.
- It unifies all this data into a form most convenient for language AI models to process and understand.
- As new data comes in, Fractal checks it against the existing data and constructs a knowledge graph to represent relationships between the different data points.
- This whole process is computationally intensive, so AlphaKEK uses a hybrid cloud infrastructure. Some components run on their own GPU servers while other parts utilize cloud servers.
- The benefit of this upfront computing is it makes retrieving the data extremely efficient. When a user asks the AI a question, it can pull much more relevant information from the knowledge graph compared to AIs that lack this stage.
- More initial data leads to more detailed, higher quality answers for users.
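The ingest-then-retrieve flow Vladimir describes can be sketched with a toy knowledge graph. This is purely illustrative: Fractal's actual internals are not public, and every class, method, and entity name below is hypothetical.

```python
from collections import defaultdict

# Toy model of the two phases described above: ingestion builds a
# knowledge graph up front, and retrieval then walks it cheaply at
# query time. All names are hypothetical, not Fractal's real design.

class ToyKnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # entity -> related entities
        self.facts = defaultdict(list)  # entity -> ingested statements

    def ingest(self, entity, statement, related=()):
        """Store a statement and link the entity to related entities."""
        self.facts[entity].append(statement)
        for other in related:
            self.edges[entity].add(other)
            self.edges[other].add(entity)  # undirected link

    def retrieve(self, entity):
        """Pull facts for the entity plus its one-hop neighbours,
        mimicking how a pre-built graph widens retrieval context."""
        context = list(self.facts[entity])
        for neighbour in sorted(self.edges[entity]):
            context.extend(self.facts[neighbour])
        return context

kg = ToyKnowledgeGraph()
kg.ingest("TOKEN_X", "TOKEN_X volume spiked 40% this week", related=["DEX_Y"])
kg.ingest("DEX_Y", "DEX_Y listed TOKEN_X on Monday")

# A query about TOKEN_X also surfaces the related DEX_Y fact.
print(kg.retrieve("TOKEN_X"))
```

The design trade-off is the one Vladimir names: the graph construction is expensive up front, but at query time a single lookup pulls in related context that a flat document store would miss.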
Ganyo summarizes that the system evolves over time from user queries, building up knowledge to provide better answers. Vladimir confirms this interpretation is roughly correct. An example is insights generated for one user's custom report are saved and provided to the next user asking a related question.
Ganyo concludes that Fractal seems to be the powerful engine powering AlphaKEK's other AI products, with each offering building on top of this centralized knowledge base.
Vinny agrees with this assessment. He reiterates AlphaKEK has taken the more difficult but rewarding path of deeply investing into core infrastructure like their uncensored AI model, knowledge graph compilation, and aggregating massive amounts of quality data from diverse sources.
Now the payoff is new capabilities are organically emerging, like sentiment analysis tools, data visualizations, and token audit features.
At some point AlphaKEK can take their high quality output and easily template it into popular applications like alerts, sniping signals, etc which are in demand but better than competitors by incorporating both quantitative stats as well as qualitative insights.
He welcomes any suggestions from Ganyo's community on specific products or use cases they'd like to see. Since AlphaKEK has built the core engine, creating the wrappers like Telegram bots is straightforward.
IV. Competitive Advantage
Ganyo notes AlphaKEK has said they offer paid AI apps, following a proven model like PAAL. He asks what makes AlphaKEK differentiated and superior to PAAL for enterprise crypto clients.
Vladimir outlines two key differentiators:
- Data Volume and Processing
- AlphaKEK's Fractal data engine can process much more data than competitors due to how they structure and store it. More data leads to better insights.
- For smaller clients with minimal proprietary data, AlphaKEK can still provide value with customization. For example, the default chatbot is neutral but clients can request a provocative persona like Sam Bankman-Fried.
- Handling Large Datasets
- For established projects with vast amounts of data across documentation, articles, years of content, other platforms cannot properly handle or take advantage of this.
- But AlphaKEK can smoothly integrate these large, enterprise datasets to train the AI and provide the best experience tuned to their project.
- For example, they created custom analytics for a well-known ecosystem client by ingesting all their website data over many years from their docs, APIs, etc.
- The client was surprised and impressed that AlphaKEK's AI could uncover insights they didn't even know it had access to in this proprietary data.
- Now this process has been streamlined and enhanced to easily integrate large, complex enterprise datasets.
Vinny adds some supplementary perspectives:
- This enterprise data integration provides value through better user acquisition, onboarding, and internal education about the client's ecosystem. Their developers and team can use AlphaKEK's AI to assist with research.
- As an example, Vinny wanted a custom AI persona bot for the SBF meme coin community, modeled after SBF and Caroline Ellison.
- He initially tried PAAL AI's free bot but found the output very generic, similar to ChatGPT. It couldn't provide the edgy, provocative responses the community wanted.
- The free PAAL also couldn't answer specific crypto research questions about analyzing on-chain data, emerging tokens, upcoming catalysts etc.
- Working with AlphaKEK provided a much better solution with custom personas tuned to the community's expectations, along with crypto research capabilities from integrating on-chain data.
- This was a successful proof of concept that AlphaKEK can now productize and scale up through their B2B white label offering.
V. Business Model
Ganyo moves the discussion to AlphaKEK's business model and go-to-market strategy. He asks Vinny to describe their target demographic and what they look for in terms of user conversion and retention.
Vinny explains:
- For B2C, AlphaKEK originally grew out of the niche 4chan /biz/ community, given Vladimir's connections there.
- Now they are branching out to target more mainstream power crypto users who love experimenting with cutting edge AI and technology.
- On the B2B side, they target the biggest crypto ecosystems that seem most open to grants, partnerships, and mutual value creation.
- For example, some hyped L2 chains like Base have potential. Vinny has been active on Base as an investor/trader and knows some key members of their team.
Ganyo asks if AlphaKEK has an ambassador program to spread awareness. Vinny notes:
- They have a marketing advisor well-connected in crypto with a large network and major influencer contacts.
- Currently AlphaKEK is focused on organic community outreach on Twitter, identifying and engaging with AI-focused accounts in web3.
- They are steadily compiling lists of key crypto figures who seem interested in AI, as it is a growing trend.
- When ready, their advisor can activate these influencer relationships to grow AlphaKEK's visibility.
Transitioning to tokenomics, Ganyo asks how many tokens exist and key details about the economic model. Vladimir explains:
- There is a fixed supply of 256 million tokens, of which 11 million have already been burned.
- The team has not done any manipulative actions with the supply. Burns have come from the autonomous tokenomics.
- There is a 4% tax on buy and sell transactions. Per the community's vote, 33% of this tax plus all revenue goes towards buybacks and burns.
- So far they have burned over 4% of total supply and will continue aggressively burning more, making AIKEK deflationary.
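The quoted figures can be sanity-checked with back-of-envelope arithmetic. Only the stated numbers (256M fixed supply, 11M burned, 4% tax, 33% burn share) come from the AMA; the helper function is an illustration of the flow, not official tokenomics code.

```python
# Back-of-envelope check of the figures quoted above.
TOTAL_SUPPLY = 256_000_000  # fixed supply stated in the AMA
BURNED = 11_000_000         # tokens burned so far

burned_pct = BURNED / TOTAL_SUPPLY * 100
print(f"{burned_pct:.2f}% of supply burned")  # ~4.30%, matching "over 4%"

# Illustrative flow of the 4% transaction tax: per the community vote,
# 33% of the tax is earmarked for buybacks and burns.
def tax_to_burn(trade_volume_usd, tax_rate=0.04, burn_share=0.33):
    """Return the USD amount routed to buyback-and-burn for a given volume."""
    return trade_volume_usd * tax_rate * burn_share

print(f"${tax_to_burn(1_000_000):,.0f} per $1M of taxed volume")
```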
Ganyo asks what motivated the buyback-and-burn mechanism and how it works in practice. Vladimir notes:
- Originally on Polygon, the 1% fee was just to sustain operations, with premium features paid in tokens.
- But with the move to high-fee Ethereum, requiring frequent transactions would be problematic for users.
- So a higher one-time fee makes more sense. It also incentivizes users to invest in the token to gain app access.
- As AlphaKEK's products and B2B business generate revenue, they use a portion to buy back and burn tokens. This makes the remaining supply more scarce and valuable.
- It rewards early adopters who supported the project before product-market fit. Their tokens automatically increase in value from the burns.
- In the future if they develop extremely advanced premium features needing lots of compute power, they may introduce another tier or make those specific features paid. But core access will remain tied to AIKEK holdings.
Ganyo asks how AlphaKEK prevents the increasing token value from hampering new user adoption over time, as the buybacks accumulate.
Vladimir acknowledges discussing this issue extensively internally. Some solutions they're considering:
- Currently tiers are bounded by dollar value not an absolute number of tokens. So new users face the same dollar-based barrier.
- But early buyers get the benefits of auto-upgraded tiers as buybacks increase value without any extra effort on their part.
- For any future premium features they may have to add a new top-tier or make those completely paid services rather than bundled with token holdings.
- Or they could denominate the tiers in a stablecoin to prevent volatility in the entry price. But for now, dollar values make the most sense for onboarding.
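The dollar-denominated tier mechanism can be illustrated as follows. The tier names and thresholds are invented for the example; only the mechanism itself (tiers keyed to the USD value of holdings, so price appreciation from buybacks auto-upgrades existing holders) comes from the AMA.

```python
# Hypothetical dollar-denominated access tiers. Names and thresholds
# are made up; only the USD-value mechanism is from the AMA.
TIERS = [("free", 0), ("basic", 100), ("pro", 1_000)]  # (name, min USD value)

def tier_for(holdings_tokens, token_price_usd):
    """Return the highest tier whose USD threshold the holdings meet."""
    value = holdings_tokens * token_price_usd
    name = TIERS[0][0]
    for tier_name, min_usd in TIERS:  # thresholds in ascending order
        if value >= min_usd:
            name = tier_name
    return name

# Same holdings, rising price from buybacks: the holder auto-upgrades.
print(tier_for(5_000, 0.01))  # $50 of value
print(tier_for(5_000, 0.05))  # $250
print(tier_for(5_000, 0.25))  # $1,250
```

This also shows the adoption concern Ganyo raises: a new user always faces the same dollar barrier, but must buy fewer tokens to clear it as the price rises.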
Ganyo asks Vinny to discuss the marketing procedures and strategy AlphaKEK has pursued up until now, and how effective they've been.
Vinny explains they are just kicking off a more full-fledged marketing campaign, focused on:
- Creating a lot of content aimed at improving conversion rates, education, and overall appeal to their target demographics
- This includes new landing pages, updating the website, producing video footage showing product usage and value.
- Iterating on messaging to make their unique value proposition clear, both for B2C and B2B audiences.
- They are in discussions with various major crypto ecosystems about integrating AlphaKEK and receiving grants to fund growth.
- Vinny has successfully secured similar grants before so has relationships to leverage.
- The integration discussion is different for an AI project compared to DeFi - it's really just about the data. So there is some negotiation around the minimum integration required to partner, get a grant, and start deriving mutual value.
- For B2C go-to-market, currently the product is like an open canvas that early adopters are using in very diverse ways.
- AlphaKEK needs to do more curation and create templates guiding users to leverage the AI for specific, high-value use cases.
- For example, creating customized alert and alpha signal bots that are tailored for particular trading strategies, risk profiles, etc.
- Distribution will be very easy by leveraging their Telegram chatbot and integrated reporting features for frictionless onboarding.
VI. Conclusion
As the AMA wraps up, Ganyo asks about any key topics that weren't sufficiently covered or any points AlphaKEK wants to emphasize.
Vinny notes Vladimir provided only a brief background summary earlier but has an impressive history of relevant experience and pioneering AI work that he can highlight:
- Vladimir summarizes he began coding at age 13 and built his first AI models at 17, so had an early start in the field.
- At JetBrains he worked on large-scale code analysis and trained some of the earliest GPT-2 based code autocompletion assistants and productivity tools.
- His embeddings work was published by OpenAI and applied to parsing and analyzing decades of astrophysicist emails to answer questions.
- He sees web3 as a natural fit given its emphasis on openness and transparency with data, in contrast to web2 which is increasingly limiting data access (e.g. Reddit, StackOverflow).
- All the historical data AlphaKEK is accumulating allows them to backtest and cross-reference with web2 sources to uncover insights like who had insider info first.
Ganyo compliments Vladimir on his impressive background and experience, especially for being only 27 currently. He agrees it demonstrates the team's professionalism and passion for pushing the AI field forward, as well as the huge potential still ahead to take AlphaKEK to new heights as they continue developing.
Ganyo concludes the AMA has covered the key points well. He thanks Vinny and Vladimir for joining and having such an engaging discussion.
Vinny reciprocates appreciation for the invite and chat, saying they would enjoy participating in another AMA with Ganyo's group in the future.
Ganyo ends by encouraging listeners to stay active in their crypto investment communities, continue learning, and who knows - they may get added to Tortuga, their exclusive private group, someday in the future if they keep sharpening their skills and network.
Transcript
Speaker 1
Hello everybody and welcome to another AMA at Tortuga. I'm your host Ganyo, and today we have an AMA with AlphaKEK. AlphaKEK is an interesting venture, and in this spot I usually put a summary, but I think they've really summarized it well themselves: AlphaKEK is an AI lab that powers web3 tools and applications with next-gen, unbiased AI infrastructure. Today's speakers are going to be Vinny and Vladimir, which is a very pleasing thing to say, I love alliteration. So let's pass off to you guys. First and foremost, please feel free to give yourselves a bit of an introduction and tell us a bit about your team.
Speaker 2
Yes. So I'm Vinny and I am the marketing and business development manager at AlphaKEK, and this is our first AMA. So, yeah, thank you, Ganyo, for hosting us, and thanks to this group for listening. It's a very well curated group and very private, which I like.
So yeah, AlphaKEK, we're an AI lab, and we built our own custom AI engine specifically for crypto data. Our ticker is $AIKEK on Ethereum, and our team is composed of our founder Vladimir, who's on this call, and four other core contributors, including myself. And we're essentially a culmination of Vladimir's life work as a data scientist and AI engineer. He's been published on OpenAI's blog as an early adopter of OpenAI, as well as on NVIDIA's site.
We basically spent the last six months, actually probably more like eight months if you include pre-development, through the bear market building and completing our engine, along with some front ends: we have a web app, a Telegram bot, and several proofs of concept with the clients and partner projects that we'll showcase a little bit later. And what we do, we basically built and operate our own uncensored, unbiased AI model, and it's fine-tuned for financial data, so both DeFi and TradFi data. And we have a consumer offer as well as a B2B offer. On the consumer side, we have several products. There's a simple chatbot, accessible through the web app and Telegram. We have a research assistant, and this is used for things like content generation, hunting down alpha (we're almost there), education, community engagement. It's very fast, focused on crypto, and unbiased, which is ideal. On the B2B side, we target crypto projects, blockchains, ecosystems, brands. For them we create custom chatbots, analytics sites, Telegram and Discord integrations, as well as web apps. We do have several clients that we can get into as well. And our end goal, you mentioned alpha, right, is to help users get insights from the markets faster and more conveniently than existing tools. We're getting pretty close to alpha. This is something a lot of AI projects, especially in this space, are chasing; it's almost a race to get as much alpha as possible. You see these alert bots and alpha bots being launched, but as we're integrating more and more data into our engine, especially on-chain data, we're getting really good results.
And yeah, I think the main problem we're trying to solve is, you might have experienced this with existing AI tools, whether in web2 but especially in web3, a lot of the output simply isn't good enough for finance and crypto research. And there's a reason for this. You probably know that a lot of these tools are built on major LLM APIs. ChatGPT, or Claude 3 for example, can't even provide you crypto output. So there are a lot of tools out there that are just built on a very centralized, very weak foundation. And that's why we spent a lot of time going down the long road of building our own infrastructure. And now that we have it, we're going to start rolling out pretty cool products that our community has been using so far. So that's the 30,000-foot overview of the project, and we can get into our backgrounds in a bit as well.
Speaker 1
Well, sure. Actually, let's get into backgrounds a bit, Vinny, because you talked a bit about that here, and now I want to learn a bit more about you. DeFi isn't your first touchpoint with the financial world, is it? There was a lot that we actually discussed previously. So do you mind giving us a little bit of detail on that, and why you're uniquely suited to be in the position you're in now?
Speaker 2
Yeah, sure. So I have a pretty good mix of TradFi, fintech, web3, and now AI as well, about 17 years in finance. I started out in banking and wealth management. Funny story, I used to work for Peter Schiff, the economist, for about 10 years. I launched several of his products, and yeah, that might seem strange, because he's probably the biggest Bitcoin bear in the world right now. But it makes sense if you look at a lot of the gold bugs and the early Bitcoin OGs: they were philosophically into sound money, very libertarian, very anarcho-capitalist. Before Bitcoin, they were gold bugs and silver bugs. If you look at Max Keiser pre-2012, he had a campaign called "Buy Silver, Crash JP Morgan." So I think we share a lot of the same ethos, the gold bugs and Bitcoin, even though we butt heads once in a while. So yeah, I worked in banking for quite a while and I saw a lot of downsides: regulatory issues, complex compliance. It just made it a lot harder to service my clients. And then DeFi Summer 2020 happened. It was a huge breath of fresh air, because you basically had code and algorithms solving a lot of financial transactional problems, right? Uniswap solved swapping, currency exchange. You had decentralized lending taking out the middleman to get fair interest rates. So yeah, I was a huge proponent of DeFi, well, I still am. And I became a partner in a DeFi platform, which was a simple aggregator similar to Zapper and Zerion. We were basically offering swapping, borrowing, earning, simple transactions, but that project in 2020-2021 allowed me to integrate with many different blockchains and protocols. I integrated with about 13 different blockchains and protocols.

I got to know a lot of these blockchains and networks, got grants from Polygon, Moonbeam, NEAR, a few others. And I also launched an early AI feature, pre-ChatGPT. It didn't really catch on, unfortunately, because it was a bit early, but it was called a token recommendation engine. Essentially you connect your wallet, it scans it, compares it against hundreds of thousands of other wallets, and recommends tokens held by similar wallets. So yeah, I was early. And in 2022 I took a side step into NFTs. I launched a few collections with some major projects on Solana, and that was more of a passion project. I created an AI music generator and streaming platform that basically uses different AI models to view your NFT and generate a custom one-of-one song from it, so you can actually listen to your collection. Long story short, the bear market hit, I went back into TradFi, which is my first love, and right now IRL I work in Singapore. I'm overseeing the launch of one of the world's largest vaults for alternative assets: metals, luxury watches, luxury art, other assets too. We essentially tokenize those, and we built a peer-to-peer lending platform on top of that with blockchain tracking, so you can track every single asset in real time, publicly. And about nine months ago I discovered AlphaKEK. I was an early community member. I found it in the bowels of 4chan, in /biz/, which is the business and finance forum, which I frequent pretty often. Vladimir is quite popular on there; we both know a lot of the admins. I saw AlphaKEK's first post in /biz/, joined the community, and became a large holder. Then I wanted to be more proactive, and as things got going and the market started heating up, Vladimir extended a marketing role to me, and yeah, the rest is history.
Speaker 1
Alright, thank you. And honestly that gives me a good impression of what your specialties are. I mean, if you understand multiple markets, you can basically help open up the roughly 6% market penetration that Bitcoin has globally to the rest of the world. And it really does open a lot of doors for you, especially when you build those connections early and when you have such a proven history and track record, which are all very positive things for the position you're in now. So let's...
Speaker 2
I do want to mention the reason why I liked AlphaKEK so much. In my IRL work, creating content for these alternative assets, I couldn't find an AI platform that could give me quality updates in real time, sort of like an alert, but also write really good financial content. The output from some of these platforms is so bad; it's like, "well, just diversify your holdings" or "talk to an expert." So I looked for an uncensored model first, and then there was /biz/. It was a perfect marriage with the content I was trying to create, and AlphaKEK is integrating other mainstream data as well, you know, Bloomberg, Cointelegraph, things like that. But that was the inception.
Speaker 1
I think that's something a lot of people are looking for, because one of the big issues that gets talked about, in fact it's pretty much the consensus, is the learning curve to actually enter crypto. And AI can simplify that learning curve, not really by dumbing things down, but by focusing on the parts people need to know rather than the whole story. I think that in itself speaks to the power of AI in the industry. It really does give people a competitive edge, even if they are new to investing or trading cryptocurrencies. Would you agree with that?
Speaker 2
Yeah, I mean, it's essentially taking huge, vast amounts of data and finding two seemingly unrelated data points and making the connection, and that's across all modalities: DEX data, on-chain data, smart contract code, obviously news, both mainstream and obscure, obscure forums, and maybe things like Satoshi's emails recently, that's integrated into AlphaKEK. So it's about finding data points that are trending at the same time in very distant areas. We sort of try to do that manually, right? Like, I look for things that are trending on /biz/ but are also validated by on-chain movements and momentum. But doing that manually is unfeasible; it takes a long time, it takes hours.
Speaker 1
Yeah, I agree with that, and it's the same thing here. I'll give you a little context: AMA groups have the same dynamic, right? All the owners talk to each other, for sure. So projects come in, they get filtered, they go to one group, and suddenly all the groups know about them. But the difficulty is then trying to find individuals, trying to find the people who are interested in a particular name, and that's time-consuming. So it is a sort of drag to find those high-level things, alpha essentially. It's an essential part, even outside of investing, of the DeFi ecosystem, on the marketing and services side too. So I really do think there's a lot of potential for growth in that particular direction as well.
Speaker 2
Well, maybe there's a custom Tortuga alert bot where we can scrape AMAs that are trending and create an analytics dashboard, you know.
Speaker 1
Yeah, exactly. I mean, all of this is really fun. You know, AI is like NFTs: if you know how to do the contract properly, you can achieve almost anything. You may have to put in several redundancies to actually make it work, but it's an efficient path to your destination, and that's what I enjoy about AI. At the end of the day it's not all about generating funny pictures, although that's what most people use it for through the low-level APIs; it's about simplifying the way we work and increasing productivity without decreasing the impact of human input, which I think is a fantastic thing to be bringing into crypto, because the faster this space moves, the more volume we're receiving and the better equipped people are to deal with the problems they face. Like, I have this thing: it's launching at this time, I can't make it, I don't have anything else at that time. It is something you want to target, and something that has been an issue in the space for a very long time indeed. But we will get to the custom bots later, I believe. So, some generalist questions first, just to give everyone a little bit of context on the history of AlphaKEK. You were on Polygon initially and then you migrated to Ethereum, right?
Speaker 3
Yeah. Hi, everyone. I think it's my turn to give you a little bit of context. So, as Vinny said, I started AlphaKEK first as an AI real-time tool for analyzing 4chan trends, simply because they update in real time, there's a public API, and it's easy to get this data. I thought that providing an uncensored model connected to uncensored data would be a really cool experiment, and it turned out to be much more powerful than I expected. Back at the time I thought we would probably have some more on-chain-specific features, and we decided to launch on Polygon to make sure our gas fees would be minimal. But almost immediately we got a lot of feedback; a lot of people wanted us to switch to Ethereum. We had a lot of messages saying, "the moment you bridge to Ethereum, we will buy your token, but not now; who the hell uses Polygon," etcetera, etcetera. So after some consideration, about three weeks after we launched, we decided to switch. We basically created a snapshot of holders, redeployed the token, airdropped it, and continued working on Ethereum, right?
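The snapshot-and-airdrop migration Vladimir describes has a simple shape: freeze every holder's balance on the old chain at a chosen block, then mint the same amounts on the new chain. The sketch below mocks that logic in plain Python; a real snapshot would read on-chain state at a fixed block height, and all names here are illustrative.

```python
# Toy model of a token migration: snapshot old-chain balances,
# then build one (address, amount) airdrop transfer per holder.
def take_snapshot(balances_at_block):
    """Freeze holder balances (address -> amount) at the snapshot block."""
    return dict(balances_at_block)

def build_airdrop(snapshot, min_amount=0):
    """One transfer per holder on the new chain; skip empty wallets."""
    return [(addr, amt) for addr, amt in snapshot.items() if amt > min_amount]

polygon_balances = {"0xabc": 1_000, "0xdef": 250, "0x000": 0}
snapshot = take_snapshot(polygon_balances)
airdrop = build_airdrop(snapshot)
print(airdrop)  # zero balances are skipped
```

The point of snapshotting first is that holders don't have to do anything themselves — the team replays their balances on the destination chain.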
Speaker 1
Yeah, honestly, I think it was a smart move. With volume spiking up and ETH reaching all-time highs, it's actually a good time to be moving towards an L1, and this is what I said in the bear run as well, to the people who were listening to me back then: when you're on an L2, it's good in a bear market because you're stagnant, not moving much, and that's a good thing; whilst everything else is going down, you're pretty much staying at the same level. But when volume kicks in, you want to be able to make that migration to Ethereum, because there's more opportunity, there's more volume. Yes, there's more risk, but it's dampened by the fact that you have more projects launching and succeeding than you do in a bear. So I think the reasons are sound. But now I want to ask: aside from that, how did the transition from Polygon to Ethereum impact your existing user base? Were they still able to access your products? Was it chain-reliant?
Speaker 3
Oh, since Polygon is an L2, it was extremely straightforward. If you have a wallet there, you of course have the same address with the same private key on the Ethereum chain. So users didn't have to change anything; they just had to log in with MetaMask once again on our website after the migration was over, and that's it.
Speaker 1
OK, fantastic.
Speaker 3
Yeah. So we still have a trail of airdrops, and sometimes people ask us, "hey, what's this airdrop the wallet did at launch," and we sometimes have to point people to our previous posts with all the public discussion about whether we should move from Polygon to Ethereum or not. But aside from that, yeah, it went seamlessly. I was really nervous back then, because it was my first experience of bridging a project that was already live, and I'm glad everything went OK.
Speaker 1
Fantastic. OK, now I've got to ask, because you mentioned the transition to Ethereum in terms of scalability and user adoption, and opportunities are better on Ethereum, but there's one big issue people face when they're migrating a project, which is the increased transaction costs: not only for redeploying the contract onto Ethereum, but the user costs as well. So has that negatively impacted your experience? Has that been a challenge you've had to overcome?
Speaker 3
I think lately, like over the last week when we had crazy high gas fees, yes, of course. But generally, at least for now, we don't have any special paid B2C services, only B2B ones. So if you want premium access to our AI apps, all you need to do is be a token holder. You basically pay your gas fees once, and then our system will just check whether your wallet has enough tokens for the premium features, and that check is gas-free, so hopefully it's OK. Of course, we are considering adding more options, so that may change in the future. And yeah, since the very beginning this was the ideal setup: having the token on mainnet and the ability to bridge to an L2, not vice versa. But it is what it is.
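The gas-free gating Vladimir describes works because reading a token balance is a free `eth_call`, not a transaction. Here is a minimal sketch of that server-side check with the chain lookup mocked out; the threshold and function names are invented for illustration, and in production the lookup would be a read-only `balanceOf` call against the token contract.

```python
# Hedged sketch of token-gated premium access: unlock features when
# the wallet holds at least a threshold of tokens. The balance lookup
# is a mock; a real one would be a no-gas read call to the contract.
PREMIUM_THRESHOLD = 10_000  # illustrative amount, not the real tier

def fetch_balance(address, balances):
    """Stand-in for a read-only balanceOf() call (free, no transaction)."""
    return balances.get(address, 0)

def has_premium(address, balances):
    """Premium is just a balance comparison, re-checked on each login."""
    return fetch_balance(address, balances) >= PREMIUM_THRESHOLD

mock_chain = {"0xwhale": 50_000, "0xsmall": 100}
print(has_premium("0xwhale", mock_chain))  # True
print(has_premium("0xsmall", mock_chain))  # False
```

Because the check is a read, the user's only gas cost is the one-time purchase of the tokens themselves, which matches what the speaker says.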
Speaker 1
OK, fantastic. So that's pretty much all my generalist questions. Let's get a bit more into the utility side of things before this drags on. I want to find out about these high-powered utilities, because what you're saying is you're not offering just one thing; in fact, you're an AI lab, just like there can be NFT labs or things of that nature. So how many AI products do you actually offer to date, and what are they?
Speaker 2
Yeah. So on the B2C side, there is a chatbot and there is a research assistant. On the B2B side, we have custom chatbots, and this is a go-to-market strategy that, you know, PAAL AI pretty much proved, just at scale: creating white-labeled custom chatbots for projects, web3 blockchains, ecosystems, anybody who has a community and needs to create a certain level of community engagement. This is good for user onboarding; it's also for having fun, right, joking around. And for us, because of our data and all the data points we've ingested, 35K or more data points, which Vladimir can talk about, it's also used for crypto research. So you can actually ask questions about tokens; it's integrated with DEX Screener data and CoinGecko data, and there's actually a new command that allows you to do a token audit as well. So yeah, it's really custom chatbots on the B2B side, and analytics sites. Right now, and we're going to announce this later this week, maybe this is alpha for this group, there's a well-known ecosystem that we created a custom analytics site for, with both public and private data for their community. I'll leave it at that; maybe Vladimir can mention their name in a second. But yeah, maybe you can elaborate on those products, Vladimir.
Speaker 3
Yeah, yeah, of course. So the common core of our products is what we call the AlphaKEK Fractal. It's basically an AI data engine that I have been developing since last summer, and the key idea of it is to take various DeFi- and TradFi-related data, from both web3 on-chain data and from web2, and combine them in the form that is best for the AI models to understand. So basically we take a lot of unrelated data, merge it together, and feed it into our front-end user apps. So yeah, we have the chatbot, available both in Telegram and in our web app, and we have a special report feature which performs a deep search across web3 data, scans additional sources, and tries to search websites for the specific topic you're interested in. It takes longer to generate an answer to your question, but it strives for that answer to be as comprehensive as possible. Then, yeah, we have visualization tools, because we are not just checking the current on-chain data in the case of web3, or just using web search in the case of web2, like a lot of our competitors are doing: we are building a knowledge graph with all the historical data since September of last year. We use this data not just to provide answers and insights and to do analysis; we also show market sentiment trends for a given time range, and you can try this in our bot right now. And in the future we will be using this data to train additional models, for custom AI agents, etc. What this means for end users is that we can train our AI on much higher amounts of data than our competitors can, and we can do it in an efficient and quick way, which for you simply means that we have more varied data than the others. For B2B partners, for other projects or ecosystems, it means that we can quickly and easily train our AI on their own specific data related to their project: their news, their documentation, marketing posts, etcetera, etcetera, to provide the best experience for their users, to engage them, to give them information about their projects, and so on. And yeah, I think we can share the thing we mentioned: we are preparing an announcement about our partnerships with multiple projects in one ecosystem, whose network is used for making gas payments, and with a project that, uh,
basically aggregates various news about decentralized infrastructure. What we did with them is that we basically took all the data they have on their websites and in their APIs and trained our AI on it. As a result, if you have some really niche questions about their infrastructure, about their technology, you can ask our AI about it and get the answers. And when they started testing it, they were surprised by how well it works; they even said, "we didn't even know that it has access to these pieces of data." So it was really cool to see that even they are impressed, yeah.
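The B2B flow just described — ingest a partner's docs, then answer niche questions from them — is a retrieval problem at heart. The sketch below uses a toy inverted index as a stand-in for whatever retrieval AlphaKEK actually runs; the documents, function names, and scoring are all illustrative.

```python
# Illustrative partner-docs Q&A: index documents by word, then
# return the docs sharing the most words with the question.
from collections import defaultdict

def ingest(docs):
    """Build an inverted index: word -> set of doc ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def retrieve(index, docs, question):
    """Rank docs by word overlap with the question, best first."""
    scores = defaultdict(int)
    for word in question.lower().split():
        for doc_id in index.get(word, ()):
            scores[doc_id] += 1
    best = sorted(scores, key=scores.get, reverse=True)
    return [docs[d] for d in best]

docs = {
    "fees": "gas payments are settled in the native token",
    "nodes": "validator nodes require staking collateral",
}
index = ingest(docs)
print(retrieve(index, docs, "how are gas payments settled?"))
```

The "they didn't know the bot had access to that data" effect comes precisely from this step: once the docs are indexed, any overlap with a question surfaces them, however obscure.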
Speaker 1
OK, so there's actually an incentive here, with the partnerships you've mentioned, for projects to approach you if they have a more technical side to their development arm, right? Because it makes it very easy for their audience to re-engage with what they're doing, for their communities to re-engage, which increases the amount of exposure everyone has on both sides.
Speaker 2
Yeah, user acquisition, onboarding. You could call this internal education for these ecosystems, because they're onboarding, you know, developers, core contributors. So that's definitely an important use case: education and internal assistance.
Speaker 1
Right, fantastic. So let's get a little deeper into the utility side of things, because I've got the overview; let's go into detail, and I want to see how deep this rabbit hole goes, so to speak. So, the Fractal: in your white paper you say this is the core of everything, right? It's a data engine. What's a data engine?
Speaker 3
I'm not going to dive too deep into the details, both so as not to share our secret sauce and so as not to bore people, but basically it does two things: data ingestion and data retrieval. The ingestion part means that we take data of multiple kinds; in our case this could be news about crypto projects, threads on 4chan, or token data, both security audits and how tokens perform on DEXs. It converts them into a unified form that is most convenient for a language model to operate on, and then it checks all the other data points we already have and searches for possible connections, constructing a special kind of knowledge graph. So it's not just saving specific data points; it automatically computes how these data points are related to the data we already have in our knowledge base, in our knowledge graph. This process is compute-intensive; that's why we have a hybrid cloud solution at the moment, hosting some of the stuff locally on our GPU server and some things on the cloud. But since we have already precomputed this data, the second half, the retrieval, is extremely effective. No one really does this, ChatGPT etcetera; they don't do it for many various reasons. In the case of ChatGPT or Bing search, it's simply because if they wanted to do this, they would need to cover all possible data, since people ask ChatGPT all kinds of stuff. We have a really narrow niche: for us, that's DeFi and a little bit of TradFi that is directly connected to DeFi, and we compute our knowledge graph just for those parts. Then at retrieval time, when the user finally asks a question to our bot or requests a report and we need to retrieve the data, since we have precomputed everything, we can retrieve much more information, with already-computed relations between the various documents, and provide much more initial information, initial insights, to the bot. When it gets all this retrieved data, it writes an answer to the user, of course. More initial information means a more accurate, more detailed answer; better answers mean more accurate decisions made by the user, more alpha found, and hopefully more wealth for the users of our apps.
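The two phases Vladimir outlines — expensive linking at ingestion time, cheap lookups at query time — can be shown in a few lines. The entity-overlap linking rule below is purely illustrative; the real Fractal's relation computation is not public, and all names here are invented.

```python
# Minimal ingest/retrieve split: each new data point is compared
# against every stored point ONCE at ingestion (the compute-heavy
# step); retrieval then just reads the precomputed edges.
class KnowledgeGraph:
    def __init__(self):
        self.points = {}   # id -> set of entities mentioned
        self.edges = {}    # id -> set of related ids (precomputed)

    def ingest(self, point_id, entities):
        """Expensive step: find every stored point sharing an entity."""
        related = {other for other, ents in self.points.items()
                   if entities & ents}
        self.points[point_id] = set(entities)
        self.edges[point_id] = related
        for other in related:  # keep edges symmetric
            self.edges[other].add(point_id)

    def retrieve(self, point_id):
        """Cheap step: relations were already computed at ingestion."""
        return self.edges.get(point_id, set())

kg = KnowledgeGraph()
kg.ingest("news-1", {"ETH", "gas"})
kg.ingest("thread-7", {"ETH", "memecoins"})
kg.ingest("audit-3", {"SOL"})
print(kg.retrieve("news-1"))  # linked to thread-7 via the ETH entity
```

This is why a narrow niche matters: precomputing all pairwise relations is only affordable when the corpus is bounded, which is the speaker's point about ChatGPT-scale services not doing it.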
Speaker 1
So what you're saying is, it's not going quite into quantum computation, but it takes the same basic principle, which is evolution over time. Essentially you have a multi-layered code base that takes queries from individuals, and because of the way your system operates, it has a training function attached. So the more people use it, the better the AI becomes at directing them to appropriate alpha, yes?
Speaker 3
Roughly speaking, yeah. So, for example, when you create a custom report on some specific topic, we save all the additional insights that were generated while answering it, and then the next user who asks a question will get all the insights that we generated during the previous one.
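The insight reuse Vladimir confirms is a caching pattern: insights produced for one report are persisted and handed to later questions on the same topic. The sketch below keys the cache by topic string for simplicity; that keying, and every name in it, is an assumption for illustration only.

```python
# Toy insight cache: each report persists its generated insights,
# so later reports on the same topic start from them for free.
insight_cache = {}  # topic -> list of previously generated insights

def run_report(topic, generate):
    """Answer a report, persisting any new insights for future users."""
    prior = insight_cache.get(topic, [])
    fresh = generate(topic)
    insight_cache[topic] = prior + fresh
    return prior + fresh

first = run_report("AI tokens", lambda t: [f"{t}: sentiment turning bullish"])
second = run_report("AI tokens", lambda t: [f"{t}: volume confirms trend"])
print(len(first), len(second))  # the second report inherits the first insight
```

This is the "training function attached" effect the host describes: every query leaves the system slightly better informed for the next one.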
Speaker 1
Fantastic. And this is the core of a lot of your other utilities, right? You could almost call it the engine that powers it all. I think that's actually quite a unique structure for a product, because you essentially have one core product, which is the AlphaKEK Fractal, and you also have all of these offshoot products with different UIs drawing from that single database.
Speaker 3
Well, another crucial part is the unbiased model itself. After training, we verify it by benchmarking on our proprietary dataset, because, you know, there's a lot of news online about some mainstream models being left-wing or right-wing or whatever: people ask them political questions, and each model gives some not-so-centrist answers in each case. The problem is that this also applies to financial data, to financial reasoning. So that's the second key part: once we have a data engine, we also need an AI model that can operate on this data flawlessly, without being too risky or too shy about making a decision, or leaning left or leaning right. It should always serve the user; that's our key thing. Some people use this just as an uncensored model that will reply to any question, but that's merely a by-product of our tech. The key is that it should be unbiased and model the user's queries.
Speaker 1
I think that's also a USP for what you're developing, right? At the end of the day, you're right, a lot of AI models tend to be biased, and it's typically creator bias: you get views filtered down based on people's own personal opinions, sort of like the created taking on aspects of the creator, because we love seeing things of ourselves in really anything, and it's the same for AI. So an unbiased model is actually a really strong selling point, because it means you're not leaning towards a political party or a particular set of beliefs or really anything; you're centrist, and that makes you reliable for financial information, whereas other AIs, which people may or may not use to this day, aren't. That in itself is a very good selling point to push forward, especially in a market that cares less about what views you have and more about how much money can I make in as short a time as possible; that's DeFi for you at the end of the day. So now that we've talked about the bias factor, I actually have some questions specifically related to the unbiased AI. How did you train the AI chat model to be unbiased, and how was that process for you?
Speaker 3
Right. So there are multiple approaches to this task. The first step is to take some open-source model; of course, we don't train anything from scratch. We take open-source models and then try to make them as neutral as possible. In our case, we currently host two different models, one for our Telegram chat and one for the Fractal itself; both are based on Mistral models by the Mistral AI startup, who used to publish open-source models. Now they don't, but luckily we still have some stuff from them. We have a lightweight model with 7 billion parameters that works for the chat; it's blazingly fast and quite cheap to serve for hundreds of thousands of users if needed, and it works with already-precomputed data from the Fractal, so it doesn't need to be extremely advanced in terms of logical reasoning. For the Fractal, we have a heavyweight Mixtral model, which is a mixture of experts; I don't know if anyone follows this, and if you don't, whatever, it's not so important. So basically we have a lightweight and a heavyweight model, both initially taken from Mistral models. After that, we do the following: we take instruction fine-tuning datasets from open sources, for example from Hugging Face, and we check the questions and answers listed there to see whether the data is neutral or not. If we see that some data has some kind of bias, we remove it from the dataset. We do this with multiple datasets, then combine them together and fine-tune the initial foundational model that we use. After this step, we also have our own special datasets consisting of crypto-related questions that we automatically ask our model.
It should answer each of these questions, and then we check the results. This is the part I try to do both automatically and manually, as an extra sanity check, to ensure that the replies we are getting are really neutral. For example, an issue we had last week, oh no, it's Monday, so a week and a half ago: we tried to upgrade our smaller model's config using model merging; I don't know if you've heard about this, but whatever. Basically, I tried to merge our current model with a model that was trained on math problems, and my initial reasoning was that if you merge the crypto model with a math model, nothing bad will happen; my goal was to improve the math reasoning of our chatbot. It turned out that the original math model had not been made neutral before being trained on math, and it still had its left bias. So when I merged the models together, I saw that we actually got better answers for math-related questions, better logical reasoning for math, but at the same time the model stopped answering some specific questions regarding high risk, or stuff the original model thought was unethical. So this is the kind of stuff we are working with. And when I talk about all this, you might think it's overcomplicated, but one of the reasons we are doing it is that every major AI player is getting more and more closed. As I've said, OpenAI was the first to stop making their models open source; they're not open anymore. Mistral was publishing open models until recently, and then they partnered with Microsoft. I have internal connections at Meta AI, and I'm pretty sure that Llama 3 will come this summer and it will be open source, and the next Llama will come this winter and will most likely also be open source.
What happens next, we don't know, because training foundational language models becomes more and more expensive; you need tens of millions, hundreds of millions, even billions of dollars to do this stuff. Our plan is to accumulate as much high-quality data as possible, so we're prepared for when the probably last open-source large language models come out. Of course they will have bias, but we will be able to fine-tune them; we won't have to train them from scratch. So what we do right now with all this data preparation is still kind of testing, and the idea is that I think we will reach the point when really advanced open-source LLMs go extinct, and we should be prepared for that point, to be able to stay independent in this regard. Yeah, I hope that wasn't too nerdy. Are you guys OK?
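The dataset-cleaning step in the training process above — scan open instruction datasets and drop Q&A pairs that fail a neutrality check before fine-tuning — can be sketched simply. The keyword heuristic below is a toy stand-in; as Vladimir says, the real check combines automated screening with manual review, and every marker and name here is invented.

```python
# Hedged sketch of pre-fine-tuning dataset filtering: keep only
# question/answer pairs that pass a (toy) neutrality heuristic.
BIAS_MARKERS = {"left-wing", "right-wing", "unethical", "refuse"}

def is_neutral(pair):
    """True if no bias marker appears in the question or answer."""
    text = (pair["question"] + " " + pair["answer"]).lower()
    return not any(marker in text for marker in BIAS_MARKERS)

def clean_dataset(pairs):
    """Keep only pairs that pass the neutrality check."""
    return [p for p in pairs if is_neutral(p)]

dataset = [
    {"question": "Is leverage risky?",
     "answer": "Yes; size positions accordingly."},
    {"question": "Rate this trade.",
     "answer": "I refuse; that would be unethical."},
]
print(len(clean_dataset(dataset)))  # the refusal-style pair is dropped
```

Filtering the training pairs, rather than patching the model afterwards, is what the merge mishap illustrates: bias baked into a source model's data survives later training on unrelated tasks.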
Speaker 1
No, no, nerdy is good, man. Honestly, I appreciate the passion you've created this with. It is a complex model, and here's the thing, right: everyone can create a good project; they just need the right resources, the right time, and the right people. The problem is, not everyone can create a good product for that project. You see meme coins with nothing all the time, and that's the proof in the pudding. But when you actually have someone who's dedicated to creating a product that not only works but also functions as intended, that can scale up like this, grows itself, and has a dedicated team behind it, that's the combination for something with long-lasting sustainability. So that's the training process of the AI chat model. Tell me a little more about the evolving nature of language here, because this is one of those things that I find will always be difficult to implement into AI: everyone's finding, and this isn't to offend anyone, everyone's finding something or someone to be mad at; that's just how it works nowadays. So, given the evolving nature of language and cultural shifts, especially in the long term, what's OK now will be completely and contrastingly different from what's OK 50 years from now. How do you actually update and refine the model to maintain that neutrality and adaptability across a lot of different users with a lot of different opinions?
Speaker 3
Well, as I've mentioned, when we say unbiased, we only mean the financial part. We should probably put up a disclaimer, but yeah, our model is uncensored, and it's trained to be unbiased when it comes to financial decisions. Everything else, except when you order a custom bot from us, which is a different scenario where we tailor everything to your request, you do at your own risk. So right now, if you ask a really harsh and offensive question, most likely you will get a really offensive response. I saw people in our group chat, where you can test the bot for free without being an AIKEK token holder, ask "why are you dumb, bot," and the bot replied something like, "you're probably trolling; instead of doing childish things, concentrate on maximizing your wealth."
Speaker
Right.
Speaker 3
Yeah. So that's what in programming is called undefined behavior. Most likely it will work, but since it's out of our scope, we won't guarantee anything, because the purpose of the bot at the moment is financial stuff.
Speaker 1
Oh, definitely get a disclaimer in front of the bot then, for sure, especially if it's dealing with financial stuff, and I do appreciate that you've tried to make it as even as possible for what it is right now; obviously that's only going to improve as you continue to develop it. OK, I'd like to move on to the next utility you have, which is the crypto reports, the alpha thing. This was interesting, because you mentioned earlier that you're actually getting close to alpha. What does the development process actually look like, and what do you define as alpha in a space that moves so quickly?
Speaker 3
I think we have two definitions of alpha, and the first was covered by you: basically, detecting emerging trends, something launching, extremely new tokens that are promising, etc. So alpha covers two scenarios. The first one is, like in the movie Margin Call, be first, be smarter, or cheat. We can't cheat, so we need to be the fastest or the smartest. In terms of being first, being the fastest, I think you've covered this: there are already a lot of sniping bots, and I even saw someone in the Tortuga chat, Alan was asking whether we want to add some buy-side functionality to our bot, and the answer is of course yes. That's one of the definitions of alpha: not just capturing a trend, but performing the underlying data analysis and ensuring it's something that's not just trending but worth investing in. We'll add this functionality when we're sure, from the data perspective, that we're good enough to provide it to users, and reports are the first step towards that. So what do reports do? They're basically alerts or TL;DRs on specific market trends, tracking the fundamentals of the market. For example, if you just want a rundown of what happened in memecoins in the last 12 or 24 hours, or you have some very specific questions, say for B2B, about your own ecosystem, the report functionality performs a deep search across websites, tries various search terms, checks the news, sees how the pieces compare to each other and whether there are any contradictions, etcetera; if it's about some specific token on the market, it checks the market data, then produces a rundown.
The other definition of alpha, I think, though we would need to check the data, is to look at the fundamentals and try to detect, or even better predict, reversals when we see some specific trends. For this case, we already have visualization commands in the bot which you can use to track any specific market narrative. Right now all it does is return some really simple charts: for example, for AI tokens, or Bitcoin ordinals, or Solana, or some specific token, or whatever, it will check what we have in AlphaKEK, extract all the data we have, and then print you the sentiment and the sentiment trend. That's the first step: tracking fundamentals. The second step would be to provide alerts, personalized alerts, to the users, collect data and feedback from them on how it's going, and use that feedback to improve our models. Only after that will we probably add some sniping functionality, because of course there are already sniping bots that just take the trending tokens from DEX Screener, check if the token is OK, and show it to the user, but they can't measure the effectiveness of such things. Our idea is that when we push such a feature, we will be able to quantify its effectiveness.
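The sentiment-trend command described above boils down to bucketing scored mentions of a narrative over time and averaging each bucket, which is roughly what a sentiment-trend chart plots. The scores, dates, and function names below are invented for illustration; the real command's scoring model is not public.

```python
# Toy sentiment trend: average per-day scores for one narrative.
from collections import defaultdict
from statistics import mean

def sentiment_trend(mentions):
    """mentions: list of (day, score in [-1, 1]) -> {day: mean score}."""
    by_day = defaultdict(list)
    for day, score in mentions:
        by_day[day].append(score)
    return {day: round(mean(scores), 2)
            for day, scores in sorted(by_day.items())}

mentions = [("2024-03-01", 0.2), ("2024-03-01", 0.4),
            ("2024-03-02", -0.1), ("2024-03-02", 0.5)]
print(sentiment_trend(mentions))
```

A falling sequence of daily means is the raw material for the "detect or predict reversals" goal: the trend chart is the fundamentals tracker, and alerts would fire on its turning points.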
Speaker 1
I suppose it's all about foundations first, right? The way you're building at the moment, you need a robust ecosystem that you're focused on and that you know about before you start expanding and trying out new ventures to plug into AlphaKEK. I think that's a really good approach, because it means you're not straying too far from your knowledge base until you know your foundation is rock solid, at which point, yeah, fine, let's do some exploration and integrate all these other things for quality of life. I think that's a really good way to approach it, especially when you're trying to build something long-term, because if the foundations aren't solid, everything just collapses in on you. OK, so.
Speaker 2
Did you catch that? Yes. Yeah, excuse me. Yes. So in other words, we've taken the road less traveled, which is a long, winding road through development, you know, creating these USPs, like you mentioned: the unbiased
Speaker 1
Absolutely. You can go ahead.
Speaker 2
Enabled, you know, by a factual knowledge graph, just massive amounts of data sources. And yeah, there's a bunch of other functions that are starting to emerge because of the output. So, you know, the sentiment analysis that you can run inside the Telegram group, the data visualization, and then integrating on-chain data, which will actually require another piece of tech we're code-naming Alpha AGI, because it's a complex multi-step approach to getting the on-chain data and structuring it for the data set. Whereas if you look at other AI projects, especially some new ones in the space, they have some hype and they're really just launching, you know, alpha bots or alert bots or sniping bots, and I think what they have is good product-market fit, because they have geared themselves towards just understanding their community and what the users want. And it's true, you know, people want alpha bots. They want to be curated: what to do and what to buy, as quickly as possible. It's sort of like creating guardrails for the output. Our advantage, however, is that because we've done the hard work and we have higher and higher quality output, now it's time to productize, so you can actually start templating some of this output. And yeah, you are seeing the token audit functionality, the data visualizer functionality, the sentiment analysis functionality happening organically, but at a certain point we could just take a look at these existing popular sniping products and alert products and just replicate them, but do them better, adding not only the quantitative data that they're providing, which is, you know, price, market cap, volume, momentum, but qualitative as well. We can start adding sentiment analysis, a bullishness score, a short summary across all these different data points, even obscure blogs and things like that. So that is a priority for us, because now we're going to market. So if your community has feedback, if they want to provide examples of bots or methodologies that they're using that they would love to productize, yeah, we're definitely open to it, because creating a Telegram bot is pretty easy, right? It's really just the infrastructure and the engine behind it that's the hard part, but that's mostly done.
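The templating idea described here, pairing quantitative market data with qualitative AI-derived scores such as bullishness, could look something like this minimal sketch; the field names and figures are invented for illustration and are not taken from the product.

```python
def token_report(quant, qual):
    """Merge quantitative market data with qualitative, AI-derived fields
    into one templated line. All field names and numbers are illustrative."""
    return (
        f"{quant['symbol']}: price ${quant['price']:,.4f}, "
        f"mcap ${quant['market_cap']:,}, 24h vol ${quant['volume_24h']:,} | "
        f"bullishness {qual['bullishness']}/100: {qual['summary']}"
    )

# Hypothetical inputs: the quantitative side would come from a market-data
# feed, the qualitative side from a language-model summary of text sources.
quant = {"symbol": "XYZ", "price": 0.0123, "market_cap": 3_100_000, "volume_24h": 250_000}
qual = {"bullishness": 72, "summary": "sentiment improving on new bot features"}
print(token_report(quant, qual))
```

The point of the template is exactly what the speaker describes: the alert-bot layer stays trivial once the engine supplies both halves of the dictionary.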
Speaker 1
All right, fantastic, and thank you for clarifying; I always appreciate a bit more detail with these things. So: bots, in-house, unbiased. Yeah, that's fine. Let's talk about the custom AI apps, because this was what interested me about the model. As you said, you know, PAAL AI popularized the white-label solution when it comes to this; it's effective and it works, which means there's a proven case study for actually scaling it up. What makes you guys different and what are you offering?
Speaker 3
OK, I could start, and probably Vinny, you could continue from the product side. I could start just from the tech, yeah. So probably I'm repeating the word fractal way too often, but since it's our core, it is what it is. Yeah, since we can basically process...
Speaker 2
Take a shot.
Speaker 3
...more data, and have more data in our knowledge graph, we are better in this very specific regard where others are really small. So yeah, we have two differences. First of all, if you are some really new product (for example, some dank meme coin was our first customer's bot), what we can offer you is this: our model is uncensored, as I've said, and the default bot that we offer to every token holder is fine-tuned to be as neutral as possible when it comes to financial decisions. But if you are working, for example, on some very specific product where you don't want it to behave like that, for example if you want the bot to resemble Sam Bankman-Fried and Caroline as closely as possible, and ideally act a little bit provocative, that is the thing that ChatGPT could refuse to do, because it could say it's unethical or whatever. In our case, for smaller projects that don't have a lot of data and only require some deep customization, we can provide that to any degree: whatever crazy stuff you would like, we will do. Another thing, for the larger-scale projects that I have briefly mentioned: if you have a really large amount of documentation, for example multiple websites, like we did for you guys, multiple websites that cover years of data, or various articles about your ecosystem, your project documentation, et cetera, that's the thing that other platforms simply won't be able to handle properly, and we can do it fairly easily. You guys were basically the first ones, and we tested all this stuff on you to ensure that everything works smoothly, so now the whole process is streamlined. When you have hundreds of documents spanning multiple years and covering all different topics, there is no difficulty for our tech to work with that. So, to sum up: with a very little amount of data you can use some deep customization and offer crazy behavior for your bot that will make your project stand out in a way that other projects can't. And in the other case, if you are a really huge established project and have a really large amount of data, we can offer you deep integration, and you can be sure it will work with your data in the most comprehensive way possible. And another, craziest thing that we are just discussing at the moment, I think, since we've started sharing the alpha, is something maybe everyone who's listening is interested in as well.
Some stuff that I would like to do in the future is to work on coding assistants, coding assistants for crypto projects. So basically, for example, if your project is not on Ethereum and you're not working with Solidity, if you're using some more niche and obscure technology, most likely the existing AI tools won't be able to provide decent development tooling. We have GitHub Copilot, which is now an industry standard, and my primary job is at JetBrains; we have been developing tools for developers for many, many years, we have our own AI systems, and every major player covers the major languages: Python, Java, C++, C#, etc. If you are working with Solidity, even there you have some issues, because Solidity is still a quickly evolving language, and since models have knowledge cutoffs, they can't really be aware of the latest vulnerabilities, code standards, et cetera; you would have to retrain them. And if you're working with some really new stuff, for example you guys have your own specific technology, and a lot of other projects like Cardano all have their own specific languages and related tasks, you won't be able to use all the latest AI tech to assist your developers, to onboard new ones, and to boost your ecosystem. That's the thing that we hopefully will start targeting by the end of the year. Yeah, that's probably it from my side, and maybe Vinny could say something from the product perspective.
Speaker 1
Yeah, sure.
Speaker
Yeah.
Speaker 2
Yeah. Ganyo, you mentioned PAAL AI and how we compare, and, you know, what are some ways that we're better. I think PAAL sort of paved the way for creating custom AI bots in this space, in web3, and in my opinion that's very good work. It's low-hanging fruit because people need it, right? So communities, projects, ecosystems: they're managing these groups and they need a good community engagement tool, right? Just for customer service, for educating people on the product. And it's also like a conversational assistant for all these people; they might use it for generating, you know, content like polished social media posts and things like that. And I was an early holder of this Solana meme coin, it's called SBF, and I basically advocated for deploying an SBF, like Sam Bankman-Fried, persona bot, as well as Caroline Ellison, into that group. So I brokered a deal with PAAL AI to create... well, to step backwards, I actually tried to use PAAL first. So the first one I used, I just deployed the free PAAL version, and then I looked at some of their white-label solutions, and what I found was the output of Sam was OK, but it was kind of more generic than you would expect, like from ChatGPT. You definitely couldn't ask, you know, silly or provocative questions; it just wouldn't provide that kind of output. And you couldn't really use it for crypto research. So there were community members in the group that were asking: hey, how do I ask about this particular token? You know, why is WIF trending? What are the upcoming catalysts? So even though it was doing an OK job with a custom knowledge base on PAAL, answering basic questions about the project, it wasn't really able to give comprehensive answers or handle research questions. Yeah, that's why I reached out to Vladimir to create a custom version, and basically create a custom persona for both SBF as well as Caroline. And yeah, the community has been really enjoying that. They've been creating really edgy content and memes for posting online, and also doing research, right? It's pulling DEX data from DEX Screener, CoinGecko, and, you know, pretty soon on-chain data. So yeah, I think that was a necessary step for SBF, to create much better output for the community, and that was a proof of concept which we can easily scale through white-labeling.
Speaker 1
And I think that's a really cool use case; it's a really good example of what you can do with the data set that you have now. And the point that Vladimir made, you know: you don't have to train a model, you don't have to collect all these data sets, they already exist within fractal. Now you've got clients coming to you trying to build their own, and all the time you're growing. The bigger the project, the bigger your data set grows, the more financial data you can draw from and the more information you have access to. And in a world where zeros and ones actually have real value, the guy who has the most information is king. So I definitely approve of the sort of mastermind plan there, because I think that is the way a lot of projects, especially in AI, should be going: it's about data collection and the growth of the industry through proper logistical analysis. But that comes a bit later, so let's go into your revenue model, actually, because you mentioned you have a $100 tier. Can you tell us what's in that, and then tell us what's in the $1,000 tier that comes after?
Speaker 3
Yeah, yeah, of course. First of all, if you simply want to take a look at our bot now, the official AlphaKEK chat bot, you can always try it with some limitations, but holding the token unlocks it fully. So the first tier is when you have $100 worth of $AIKEK. You get access to our AI apps and the web chat, and you can see the auto-generated report data that we produce every few hours using the current trending topics. And once you have logged in with your MetaMask on our website, you can also type a login command in our Telegram bot and it will forward you back to our website; it will pair your Telegram account to your wallet, and from then on our bot will know that you're a holder, and you will get access to use the bot in your DMs or in any other chat. That's probably the first benefit that people use the most: being able to chat with the bot and use its functions in private, not letting everyone know what you are looking for and what you're interested in. And of course, aside from the chat, you can also use the reports, the visualization features, and the summaries. But that's just the tip of the iceberg. The second tier, when you have at least $1,000 worth of $AIKEK, is what we call the premium tier at the moment, and it gives you some extra features. First of all, when you generate a report, or open someone else's report, for example one of our auto-generated ones, you will get a more detailed rundown of each of the sources: you'll get, for example, bullishness and legitimacy data that we derive by analyzing the text. Each report can draw on a lot of sources, up to tens of them, depending on the question you are asking, and you can not just check the answer but also get a really detailed analysis of where the data came from; you can verify it and get a TL;DR of every source just by hovering your mouse. You also get a lot of other features in the bot as well. If you use visualizations, you get a longer context, a longer history frame for market sentiment data; if you're using summaries, the generated summary will include more messages. And in the future, when we start rolling out more features, which I think we'll discuss in a few minutes, all the premium tier holders will be shortlisted as beta testers first.
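The tier gating described here, $100 of $AIKEK for basic access and $1,000 for premium, amounts to a simple dollar-value check on the linked wallet. A minimal sketch follows; the thresholds are from the AMA, while the function name and numbers are assumed for illustration.

```python
def tier_for(balance_tokens: float, token_price_usd: float) -> str:
    """Map a wallet's holdings to an access tier by dollar value.
    Thresholds follow the tiers described in the AMA; names are illustrative."""
    value = balance_tokens * token_price_usd
    if value >= 1_000:
        return "premium"
    if value >= 100:
        return "basic"
    return "free"

# An early buyer's tier upgrades automatically if the price rises,
# with no extra purchase needed (hypothetical balance and prices):
print(tier_for(20_000, 0.006))  # $120 of value  -> "basic"
print(tier_for(20_000, 0.060))  # $1200 of value -> "premium"
```

Because the check is on dollar value rather than token count, the same balance can cross into a higher tier as the price appreciates, which is the early-holder bonus discussed later in the conversation.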
Speaker 1
OK, fantastic. So now I've got to ask about the security stuff, because this is always the interesting bit. In terms of privacy and quality control, right, because this is a big thing with AI: we want our AI to be able to take our personal information but really do nothing with it. So what kind of security measures do you have in place to safeguard individual user data and ensure, say, the privacy of transactions with AI applications? For example, if you're sending money over the blockchain, how do you ensure that's not tied to a particular person? Particularly in light of the regulations around the AI space, data farming, and potentially some privacy violations as well.
Speaker 3
We actually have master's degrees in cybersecurity. Initially what I was studying was AI models for cybersecurity, and while studying at university I realized that I like AI in general much more than cybersec. But I still remember it, and I still have a bit of paranoia left over from all those years of lectures. The first thing that we do is minimize any data that we hold at all. If it's possible not to save data anywhere, and to remove it immediately after the transaction is over, we do. This is, for example, one of the reasons why we never train on your inputs and outputs. Sometimes people ask: can you add those rating buttons for the outputs like ChatGPT does, so that if I don't like an output I can mark it somehow? At least for now, we decided not to do this at all, because doing it means saving that data somewhere, and we prefer not to. Maybe in the future, for some private beta testing when we roll out new features, we will enable this just for the sake of data, so early users can evaluate it. But right now we purposely don't save this stuff anywhere. There are still some things that we save, for example the Telegram IDs associated with wallets, but this data is encrypted and stored in the cloud, basically in Google Cloud, which is certified under SOC 2, and even I sometimes have a hard time accessing it because of multi-factor authorization, et cetera. Probably at some point in time we will try to implement even stricter security measures. For example, all this zero-knowledge stuff is emerging, and I keep an eye on zero-knowledge ML, et cetera; it's an amazing idea, basically, to have zero-knowledge ML in general, but for now it comes with huge computational overhead: large language models are really memory intensive, and wrapping all this stuff in ZK algorithms makes them unusable. So the answer is that we try not to save anything at all; the little pieces of data that we do save are encrypted and stored in a cloud certified with various security certificates and protocols. In the future we hope to also wrap this stuff in zero-knowledge proofs, so we won't know at all what you are doing with our product, even if we wanted to, because that's important when you work with financial stuff. And of course, for really big clients, you could probably even do some on-premises deployment, where we don't have any access to your service at all and you're running everything on your own, but you need to be a really big client for that, I suppose.
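Vladimir mentions storing only an encrypted Telegram-ID-to-wallet link and minimizing everything else. One common data-minimization pattern, shown here purely as an illustration and not as AlphaKEK's actual scheme, is to store a keyed hash of the Telegram ID instead of the raw ID, so a database leak reveals neither the ID nor the ID-to-wallet mapping without the server-side key.

```python
import hashlib
import hmac

# Server-side secret; in production this would live in a key management
# service, never in source code. Everything below is an illustrative
# pattern, not AlphaKEK's actual scheme.
SECRET_KEY = b"example-only-secret"

def link_token(telegram_id: int) -> str:
    """Keyed hash of a Telegram ID: lets the bot recognize a returning
    holder without storing the raw ID alongside the wallet."""
    msg = str(telegram_id).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

# Store {link_token(id): wallet_address}; on login, recompute and look up.
db = {link_token(123456789): "0xWALLET"}       # hypothetical wallet value
assert link_token(123456789) in db             # returning holder recognized
assert link_token(987654321) not in db         # unknown user
```

The same lookup works for the `/login` flow described above: the bot recomputes the hash from the incoming Telegram ID and checks membership, never needing the raw ID at rest.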
Speaker 1
Yeah, absolutely. Fantastic, thank you for the answer, I really appreciate it, and it's good to know that there is internal and external protection and security. It's nice to know that a guy who has a master's in cybersecurity is managing the defense of our private data, which is quite nice. So, with the introduction of custom AI apps and integrations and things like that, how do you plan to maintain quality control and prevent potential misuse of your products by third-party developers, or, you know, other users who want something for a specific purpose, where that purpose is either inappropriate or highly illegal?
Speaker 3
Yeah, that's the tricky part. So I got personal feedback and warnings in my DMs from token holders regarding PAAL, because there were stories of them vetting some people and making bots for them, and these projects turned out to be rug pulls. In the case of B2B we will of course have background checks. And for all the other cases, we are still figuring out what the best policy would be. I think ideally it would be something like this: by default, the bot will try to limit questions that amount to financial advice, and if you want to unlock it, you should basically agree to some specific terms, probably sign a transaction to verify that you have read and accepted them, or something like that. So it's a really tricky thing, especially given that every country has its own definition of what's allowed and what's not, and the European Union is getting stricter and stricter about this. We are actually really well prepared, because our fractal engine contains the European regulation on crypto in its knowledge, so sometimes I ask alpha questions myself to get advice on this. So the idea is to add some limiting measures; that's for B2C, and for B2B I think we'll decide individually: whether they want to control this stuff themselves, and what degree of control they have. For example, in the case of the SBF bot, there is a limitation: basically, be provocative and naughty, but don't do any not-safe-for-work stuff. Since we have a really high degree of control over these unbiased and uncensored models, uncensored doesn't really mean the bot will always reply with bad stuff or adult stuff; it means the bot will do whatever you train it to do. In the case of the SBF bot and the Caroline bot, that means the bot won't ignore it when users write some really not-safe-for-work questions, but it will try to answer with humor and stay more or less polite. It will be suggestive, slightly trolling the user who asked the question, but it will still be safe for work. And I think that's the scenario that most of the B2B partners will probably want to stick to.
Speaker 1
You very quickly touched on the nature of the EU, actually, and this is something that's interesting, because very recently, back in February, they passed rules for the use of AI tools and software, right? This was in terms of what they can be used for, at what levels they can be used, and where they would be banned. And you're touching on the upper echelons of that with the financial information part, or at least you are in part, because what they don't want is an AI-controlled open market. So would you say that this is going to be a potential obstacle to adoption for you in the EU, or would you say that there is room to navigate and you're looking into your options?
Speaker 3
I think, well, right at the moment they limit, for example, medical advice, not financial, so hopefully, at least for now, we are safe. But I am preparing for the worst: they may want some guardrails to be implemented and all the systems made transparent, and in that case we will have to comply to be able to provide the service within the EU, most likely. So we will see. I think first and foremost it will depend on how expensive it would be to implement, because the biggest concern about all this AI regulation is not just that AI is being regulated. On one hand, implementing all the security measures is expensive and time consuming, and time is money. On the other hand, it will limit the user base one way or another, because you limit some use cases, some people decide they no longer need this, and you lose revenue. So if they introduce some limitations just for financial advice, we will try to estimate how expensive it would be to implement the guardrails they want. And if we decide to do it, one thing we will do for sure is make it absolutely transparent to the user what exactly we did with the whole guardrail system. Because what we have with all the others, like Claude, GPT, et cetera, is that they just politely refuse. If we are ever limited, and I hope not, but who knows what happens in the coming years, we will give the user a detailed rundown of what was censored, how, and why, to give them full understanding. So even if the answer is somehow censored, we should still provide as much information as possible, including which censorship mechanisms were applied, how, to what degree, to what extent, and why. Yeah, that's kind of the worst-case scenario. I hope they won't do this, but we'll see.
Speaker 1
Yeah, no, that's absolutely fine. So let's take a look at marketing procedures for you guys, right? This is a question for you, Vinny: marketing and development. So talk to me: what marketing procedures have you been taking up to this point to get the product and the token out to the world, and how effective have they been?
Speaker 2
Yeah, we're just starting a marketing campaign. So again, as we mentioned, this is our first AMA. We're creating a lot of content that will help conversion rates, you know, foundational content. We just launched a new website, so feel free to go to alphakek.ai. We're still building a lot of content, but yeah, we want higher conversion rates: landing pages, a cleaner home page, iterating on messaging, more video footage, so we have a lot more raw footage of the actual app in action and it's a bit clearer what our offer is, both B2C and B2B. We are in talks with more ecosystems, more blockchains, and we are planning to apply for grants. I have successfully secured grants in the past from a couple of blockchains, Moonbeam, NEAR, and Polygon to name a few, so I have some good relationships with these blockchains, and I still talk to a lot of them even if we didn't get grants from them in the first round. And I think the integration strategy matters, because these blockchains want to hear how you integrate with, you know, their data or their chain. An integration strategy for an AI project is a lot different, maybe easier, than for a DeFi project, because really it's just data: if we can just integrate some of their on-chain data, it's really based on what they want. So yeah, there's a bit of a negotiation that happens: what is the bare minimum for us to partner, you know, get a grant, get some support from these ecosystems. So that's a big priority; we have a long list of ecosystems and blockchains that we're talking to as well as reaching out to. As far as the go-to-market for B2C, it's sort of what we mentioned earlier: right now the app is kind of this open canvas that many people can use in different ways. For one, when I was creating content for my own research track record, I was creating blog articles about specific topics, RWA for example, metals, alternative assets. And what I found was: I was an early tweeter about the app, I was explaining over this long Twitter thread, when the app first launched, how to use it, and I realized, oh, we have to create a lot of curation and a lot of instructions for people to use this in different ways. So why don't we just enforce that, implement that, through bots that curate that specific use case out of the box? So that's something we're doing: collecting these different use cases, different templates, and we're going to, you know, quickly move towards creating these alert bots and alpha bots that people are asking for.
Speaker 1
OK, fantastic.
Speaker 2
And the distribution and distribution be easy to telegram.
Speaker 1
Yeah, distribution should be easy through Telegram. You've got the Telegram bot that integrates the chat AI as well as the crypto reporting, right? So it's really easy to access from your phone. OK, wonderful. So.
Speaker 2
That's correct.
Speaker 1
I want to ask: what's your target demographic for the product at this stage? You've just started your marketing, you're pushing forward for the first time. What's the target demographic that you're looking to bring into your community, and what are you looking for in terms of people actually being there for conversion? Because it's one thing to convert a person; it's another thing entirely to keep them there.
Speaker 2
Yeah. For the B2C, I would say our target market, before, you know... we grew out of a very, sort of, I guess underground, very niche community, which was biz. So our initial community, probably our first hundred members and users, were from that community, and now we're branching out to more of, I guess, the mainstream audience. But definitely still power users: people who, you know, love AI, who like to experiment, who like the tech, and who are finding their own ways to use it. On the B2B side, I guess it really depends on the biggest ecosystems that are offering grants, offering support, and that are willing to partner; we'll go with those first. And there are some hyped chains out there. You know, we're looking at Base, and I really like the Base L2 network; I've been pretty active on there as, I guess, an investor and trader, got some early bags on some of their meme coins, and I know some of their, yeah, management staff. So yeah, I'll probably reach out to some of these hyped chains first.
Speaker 1
And I suppose there's the KOL front to talk about: is there anybody doing an ambassador program for AlphaKEK at the moment?
Speaker 2
We have some in the pipeline, yeah. We have a marketing advisor slash agent, and they have a massive network; they have some of the biggest KOLs in the space on speed dial. Right now we are doing more of an organic push: we're trying to win hearts and minds through Twitter, we're filtering a lot of AI-related posts in web3, and we're accumulating a list of the KOLs that are interested in AI. Yeah, it's a trending topic, you know, AI projects, AI tokens. But yeah, we have access to a lot of these major KOLs on speed dial.
Speaker 1
OK, fantastic. So there are plenty of opportunities for a potential catalyst to increase the price. Let's talk about tokenomics a little bit then. First off, how many tokens are there?
Speaker 3
I think I can take this one. So yeah, we have 256,000,000 tokens total, and we have burned eleven and a half million of them already. We haven't really touched the tokenomics otherwise. We have a 4% tax on buy and sell transactions, and, as we decided in our community vote, we use 33% of this tax, and of all of our revenue basically, including B2B, to buy our tokens back and burn them. So we have burned over 4% of our total supply so far, and we will stick to the plan and keep burning more and more, basically making the token deflationary.
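The figures quoted here (256M total supply, a 4% transaction tax, 33% of it routed to buyback-and-burn, and over 4% of supply burned) can be sanity-checked with a little arithmetic. The trade volume and token price below are hypothetical inputs, not real figures.

```python
TOTAL_SUPPLY = 256_000_000      # tokens, as stated in the AMA
BURNED_SO_FAR = 11_500_000      # roughly 11.5M tokens burned
TAX_RATE = 0.04                 # 4% tax on buys and sells
BURN_SHARE = 0.33               # 33% of tax revenue goes to buyback-and-burn

# Burned fraction of supply matches the "over 4%" claim:
print(f"{BURNED_SO_FAR / TOTAL_SUPPLY:.2%}")  # 4.49%

def tokens_burned(trade_volume_usd: float, token_price_usd: float) -> float:
    """Tokens bought back and burned from a given taxed trade volume.
    Volume and price here are hypothetical inputs, not real figures."""
    burn_budget = trade_volume_usd * TAX_RATE * BURN_SHARE
    return burn_budget / token_price_usd

# e.g. $1M of taxed volume at a $0.01 token price burns about 1.32M tokens:
print(tokens_burned(1_000_000, 0.01))
```

11.5M burned out of 256M is about 4.49%, consistent with the "over 4%" claim made here.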
Speaker 1
OK. And this is something we should have touched upon in the revenue model section, actually, but it fits under tokenomics as well. You have a buyback-and-burn mechanism, right, based on your revenue. What does that look like, and why did you implement it?
Speaker 3
Yeah. So, first of all, initially, back on Polygon, the thought was that we would just use a 1% fee on Uniswap trades to sustain our infrastructure, and for some possible premium products we would ask an additional token fee, using the fact that Polygon has nearly free transactions. When we switched to Ethereum, we had to deal with the fact that asking the user to make frequent transactions is probably a bad idea, because it would cost a lot of gas; it's probably better to have a one-time fee instead, and a higher one-time fee could be perceived as a good thing from the user's perspective if you just take 4%. And at the same time, it would be nice to incentivize the user to invest in us. That's basically why we chose this plan: on one hand, when you buy our token you get access to the apps, and on the other hand, as we accumulate more revenue, both from B2C and B2B, and do more and more buyback and burn, our token will get more and more scarce, and therefore, hopefully, more expensive.
Speaker 1
Wouldn't that present an issue when it comes to new user adoption at a later stage, though? This one's for Vinny, right. If the token price is climbing higher and higher, there's more of a barrier to entry, and those who got in early have essentially paid $1,000 but seen their investment grow. Wouldn't that actually disadvantage people coming in later, if you've based it on dollar value rather than the number of tokens held?
Speaker 3
Yeah, we discussed that internally a lot, and yeah, right now the user tiers are bound to dollar value, not to some specific number of tokens. And that's the other thing: if you are a really early token buyer, your user tier will basically grow automatically, without any extra effort from your side. So, for example, if you chose to get the basic tier but got in early, that means that over time, if the token grows high enough, you will get to the premium tier. So on one hand, newer users still have the same barrier to entry in terms of dollar value, but on the other hand, our early users get a bonus for trusting us in our early stages: they get the more premium tiers for cheaper. And probably, if we develop some really cool stuff in the future, some AI features that are extremely compute intensive (and I think once we grow in terms of hardware that could be true), we will have to introduce one more tier, or some paid services. I won't dive deep into the details because I'm still figuring out how it will look, but I think at some point we'll offer services that are really compute intensive, that require a lot of resources, and to access them you will need to be a really high tier holder, or you'll buy them for tokens. Yeah, we'll figure it out.
Speaker
OK.
Speaker 1
No worries at all. So yeah, teething issues, right? You're marketing for the first time and you're going to receive a lot of feedback like this, and I think that's going to be really helpful in developing your process; hopefully you'll find a way to do it. I mean, my suggestion would have been to tie it to a stablecoin, and then utilize that stablecoin payment to buy back into your own token, so that you can mitigate some of the price impact and volatility you would otherwise experience with crypto. But yeah, you take whatever method you want. Because, well, that's the end of my questions, there have been a couple of questions in chat that I do want to cover. This one was sent to me in a DM. So: we talked about finding alpha using the crypto reporting bot, as well as the search chat function in the unbiased chatbot. What he wants to ask about is the reverse: can it help to reduce or eliminate scammers by highlighting contracts that have a bad history behind them?
Speaker 3
Yeah, yeah, actually, that was one of the things I was experimenting with before starting AlphaKEK as we know it. Initially I was exploring raw on-chain transaction data and trying to build graphs to find scammers and rug pullers, and to analyze contract data at transaction-level granularity. But back then it turned out that it's cheaper and easier to start with language models than with on-chain data analysis, because if you do really advanced stuff, you have to host a full node yourself. That's probably a bit of nerdy stuff, but whatever. If you want to do a really deep analysis of on-chain data and figure out more than the usual token-sniffer bots can tell you, it's basically impossible to use services like Alchemy or Infura — basically any Web3 data provider — for two reasons. First, you need a lot of throughput: you need to process a really large number of transactions with low latency. Second, your full node and your scam-detection software should be running on the same machine, and if you don't have direct access to Infura's or Alchemy's servers, your software simply won't be quick enough to detect things and compute all the stuff needed. For some time I was doing experiments on a testnet where I was basically scamming myself and watching how the transactions flowed, et cetera. We have more detailed transaction analysis in our roadmap a bit further in the future, because we'll have to implement it chain by chain. And what we want to do now, as we said, is basically copy the best. There's a lot of basic stuff that we still haven't fully implemented.
As I see it, finding alpha is kind of a puzzle with a lot of different pieces. We have some unique pieces that no one else has yet, but at the same time there are some pieces that are widespread and popular among other projects but still missing for us. For example, tracking scammers across all chains: for now we only do this for Ethereum, plus some more advanced checks, and we're going to cover that first. After that we'll dive really deep into analyzing on-chain data. And the cool thing is that, unlike Web2, Web3 data persists over time. We manually save all the data we crawl from the Internet for retrospective analysis in the future. The idea is that once we've finished every low-hanging fruit on our roadmap in terms of on-chain analysis, we'll go back and check what we have in our Web2 dataset against what happened on chain on that day. If we know about some scam, or we have some specific news, et cetera, we'll start digging back months and figuring out which accounts reacted. Was there someone who had inside information and reacted before it was published? There's a lot of amazing stuff we could do, and we'll certainly reach that point eventually. But right now we just need to finish picking all the low-hanging fruit first.
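The kind of per-transaction red-flag screening discussed above can be sketched as a simple heuristic filter. The specific heuristics, field names, and thresholds below are hypothetical illustrations, not AlphaKEK's actual detection logic:

```python
# Illustrative sketch of transaction-level scam screening. The heuristics
# and the transaction-dict format are hypothetical; a real pipeline would
# decode these fields from raw blocks.

def flag_suspicious(tx: dict, deployer_addresses: set) -> list:
    """Return a list of red-flag labels for one decoded transaction."""
    flags = []
    # A large transfer out of a deployer wallet can signal a rug pull.
    if tx["from"] in deployer_addresses and tx["value_eth"] > 100:
        flags.append("large-deployer-outflow")
    # Liquidity removed shortly after contract launch is another red flag.
    if tx.get("calls_remove_liquidity") and tx["contract_age_blocks"] < 1000:
        flags.append("early-liquidity-removal")
    return flags

# As Vladimir notes, reading blocks fast enough for this requires a
# co-located full node rather than a hosted RPC, e.g. with web3.py:
#   from web3 import Web3
#   w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
#   block = w3.eth.get_block("latest", full_transactions=True)

deployers = {"0xdeployer"}
tx = {"from": "0xdeployer", "value_eth": 250.0,
      "calls_remove_liquidity": True, "contract_age_blocks": 12}
print(flag_suspicious(tx, deployers))
# -> ['large-deployer-outflow', 'early-liquidity-removal']
```

The latency argument in the answer is exactly why the node-access lines are shown against `localhost`: round-trips to a remote provider dominate once you scan every transaction in every block.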
Speaker 1
Yeah, good answer. So this is a question from Alan in the Tortuga chat. If some of your sources are known — I think he's referring to the reporting and news part of your bot — is there a risk of competing projects polluting that data stream in order to purposefully FUD or damage an upstart competitor without any real basis?
Speaker 3
Yeah, that's a really good question. I think there are multiple levels at which we can mitigate this. First of all, soon we will release more interactive data visualizations for our data sources. Right now we have a visualize command in our bot that shows you just a plain 2D image, and later on we'll upgrade it to a full-fledged web app that shows you things in 3D. At that point it will be extremely easy to find semantic duplicates. For example, if you use ChatGPT to generate some kind of FUD article, and you use similar prompts or even the same prompt to generate multiple articles of a specific nature, it's actually extremely simple to compute their similarity. This doesn't only work for 4chan threads — it will surely work for shill threads, and it will work for that kind of news as well. The text can be different, but if the semantics are essentially the same — if it's the same kind of FUD — it will be quite easy to detect. And of course, if someone decides to play a really offensive game with us, we'll figure out how to deal with it. I think we will win.
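The semantic-duplicate idea above can be illustrated in a few lines. Here a plain bag-of-words vector stands in for the learned embedding model a real pipeline would use; the sample texts are invented:

```python
# Toy illustration of semantic-duplicate detection: coordinated FUD
# articles generated from the same prompt land close together in vector
# space. A bag-of-words Counter stands in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Very crude stand-in for a learned text embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

fud_a = "project x team dumps tokens investors warned of exit scam"
fud_b = "investors warned of exit scam project x team dumps tokens"
unrelated = "new layer two rollup cuts fees for nft marketplaces"

# Reworded copies of the same planted story score near 1.0...
print(cosine(embed(fud_a), embed(fud_b)))
# ...while genuinely unrelated news stays far away.
print(cosine(embed(fud_a), embed(unrelated)))
```

With real embeddings the same thresholding works even when the duplicates share no surface vocabulary, which is the property the answer relies on for detecting prompt-generated FUD campaigns.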
Speaker 1
OK, fantastic. So some parts of this question have already been answered — feel free to ignore those parts. Would you consider expanding to performing actions like buying tokens for clients, either doing it yourself or by integrating with some of the Telegram sniper bots?
Speaker
Yeah.
Speaker 3
Yeah, actually, I mentioned this in the previous question from Alan. As I've said, and as Vinny said, we will surely do this, but our idea is to provide measurably good advice and calls. We could implement basic sniping functionality at any moment — it's quite easy to do. You just take a token, see if it's trending right now, see if there are some obvious threats or red flags, and if not, show it to the user. That's what a lot of bots currently do. Our idea is to make sure that when we post something, we have measurable data that assures us we can actually recommend this specific token to the user.
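One simple way to read "measurably good calls" is to gate recommendations on a verifiable track record rather than on heuristics alone. This is a speculative sketch of that idea; the record format and thresholds are made up, not AlphaKEK's:

```python
# Hypothetical gate for "measurably good" calls: only surface a signal's
# picks once that signal has enough history and a sufficient hit rate.
# min_calls and min_hit_rate are illustrative thresholds.

def should_recommend(signal_history: list,
                     min_calls: int = 20,
                     min_hit_rate: float = 0.6) -> bool:
    """signal_history: booleans, True = a past call turned out profitable."""
    if len(signal_history) < min_calls:
        return False  # not enough evidence to recommend anything yet
    hit_rate = sum(signal_history) / len(signal_history)
    return hit_rate >= min_hit_rate

print(should_recommend([True] * 15 + [False] * 10))  # True  (25 calls, 60%)
print(should_recommend([True] * 5))                  # False (too few calls)
```

This is the contrast with plain sniper bots drawn in the answer: the filter is driven by measured outcomes, not just by "trending plus no obvious red flags."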
Speaker
Yeah.
Speaker 1
All right. Wonderful. So oh, Vinny, do you have something?
Speaker 2
Yeah, I just want to add one quick thing. Transactions will probably be later down the roadmap because they come with a higher level of risk. Right now it's more about democratizing callers. In the calls industry, we're all getting alpha from somewhere — whether we're doing it manually by deep-diving into different forums and cross-checking on-chain data against CoinGecko, or listening to callers who might have insider information because they're closer to the project, closest to the information. So this is sort of another modality that might have some crossover with those two methods, but it's a more distributed, more democratized way to do it.
Speaker 1
Fantastic. So we're coming to the end of the AMA now, give or take. Is there anything from your side that you believe we've missed in the AMA or anything that we haven't covered?
Speaker 2
I think Vladimir is very humble about his past and his experience. And yeah, I noticed that we didn't really ask much about his time at JetBrains, or his embeddings work with OpenAI.
Speaker 3
There are a couple of things about me from the past, yeah. If we're finished, I could brag a bit about myself. For example, on the previous question about data poisoning and purposeful FUD and damage: one of the things that inspired me to start AlphaKEK was my previous job. I was a development lead at the JetBrains Research astroparticle physics lab, and we did a lot of work analyzing astrophysical data, including raw measurements from telescope arrays that capture cosmic rays. I actually presented this at the NVIDIA conference in 2022. The last project I was working on — I was the diploma supervisor for one of the students — was called "language models for multi-messenger astronomy." We took large archives of emails that astrophysicists from NASA and other labs had been sending to each other regarding different reports since the early '90s. There are about 50,000 such emails from the early '90s to the present time, and they're all in a public archive on NASA's website. And we started training what we now know as the AlphaKEK Fractal: we tried to figure out how to process all this information — it has uncertainty, it has typos, these are basically just emails with incomplete information — and to structure it and extract insights from it. It's now running in production in a lab in Paris. Before that, I was doing a lot of work with computer vision and generative AI; I presented some generative image work at the NVIDIA AI conference in 2021, before it became popular, before Stable Diffusion appeared. That work was also mentioned in an OpenAI blog post.
We had private beta access to the embeddings that we used to analyze all those messages. Before that, I was working on large-scale code analysis: I was trying to train one of the very first coding assistants based on GPT-2, when it had just come out. So yeah, I have some scientific papers, I trained my first ML models about nine years ago, and I have about seven years of experience. I could talk a lot about the previous stuff. I also mentioned that I have a master's in cybersecurity, so during crypto classes we experimented with the very first versions of Solidity, and I wrote my first smart contract back then, about six years ago. Since then I've been investing in crypto as a hobby — not doing anything really special, but I was really passionate about it due to the openness of the data. And we see this trend happening with Web2 right now: the moment GPT-4 was released a year ago, a lot of websites started closing their public data APIs — for example Reddit and Twitter. Or take Stack Overflow and Stack Exchange: formally, on paper, the data has a Creative Commons license and everyone can use it, but there's a trick. Previously, Stack Overflow published a ZIP archive with all their data every three months, so everyone could get all of it in one click. Since GPT-4 was released, they stopped doing this. So right now the data is still formally open — you have the Creative Commons license and everyone can use it with attribution — but at the same time it's now practically impossible, because you have to use the API, and the API is designed to be slow enough to prevent you from actually collecting all the data. And on the other hand —
Speaker 3
Web3 is still open — it's more open than ever, and we have more and more data in there. It's extremely natural to me that we should push in this direction, yeah.
Speaker 1
Right, fantastic. It's nice getting to know a little bit more about you. You've been around the block a bit, haven't you, Vladimir?
Speaker 3
Yeah, I'm almost 27 years old. But I started coding at 13, and I built my first ML models when I was 17. So I had an early start.
Speaker 1
Yeah, you could say that again — at 13 years old, you were coding.
Speaker 3
Yeah, yeah.
Speaker 1
That's very impressive, man. When I was 13, I couldn't figure out what to wear for the day. Honestly, I think it does show the level of professionalism that you're willing to take this on. And you're only 27 — you've got a long life ahead of you, a lot to do. So I'm looking forward to seeing what heights you take AlphaKEK to.
Speaker 3
I still can't.
Speaker 1
So yeah, Vinny, is there anything else to add on your side?
Speaker 3
Vinny, you've been unmuted.
Speaker 2
Oh, sorry guys. No, nothing from me — I just wanted to thank you again for hosting us, and yeah, we'd love to do it again. It was a fun chat, and I liked how deep you went. Always a pleasure.
Speaker 1
Oh yeah, you know we like going deep — down to the salty depths, right? So yeah, I really appreciated having you guys over. We'll call that an end to the AMA. Once again, thank you so much for coming on and having a chat with us. Before you go, though, I do have one question: what's the call to action — what could we do to help you guys out? Obviously we'll post something in the Tortuga group as well, just to reiterate that message, but what can we do to help grow your presence and help you really expand out into the Web3 world?
Speaker 2
Maybe recommend us if you're connected to any blockchains or ecosystems that could benefit from custom analytics and custom data integrations — if they just want to test what happens when their data goes into Fractal, what kind of output they'd get. Like we mentioned earlier, we were really surprised at how in-depth we were able to mine some of those insights. So please let us know, and definitely join our Telegram to test out our bot and all the new little commands. It's our little playground, so feel free to join.
Speaker 1
OK, fantastic. There are a few people in here who are actually very well connected in blockchain, so I'll recommend a couple myself, since I speak with some of the founders. Aptos — have you contacted Aptos recently, or are you in contact with them?
Speaker 2
Not yet, no. They're on our list, though — yeah, we have heard of them.
Speaker 1
Oh, in that case I can give you a direct line to the founders, that's no problem. Yeah, that's what we like to do over here. So we'll get you in contact with them, and hopefully we get some more people joining your group and trying it out.
Speaker 2
Thanks, bye.
Speaker 1
Well, as you said, playing around in the playground, right? But yeah, apart from that, I'm pretty happy — let's sign this off. Thank you, everyone, again for attending, and have a pleasant evening. If you're interested in joining Tortuga, or you're looking at joining us from our sister group, the Traders Cove, please feel free to post and stay active in either group — and who knows, you may get added to Tortuga one day, which is our exclusive private group. OK, thank you everyone for attending, and we'll see you all later. Bye-bye.
Speaker 3
Thank you everyone. See you.
Speaker 2
Thanks guys. Take care.
About AlphaKEK AI
AlphaKEK is an AI lab that seeks to be an indispensable AI infrastructure layer for Web3, helping users better navigate the space and improve their financial decisions. Their B2B solutions currently power Web3 tools and applications using a next‑gen, unbiased AI model and data engine designed for crypto.
Website | Telegram | Discord | CoinGecko | CoinMarketCap