Company Memo
Generative AI (Gen AI) marks a significant paradigm shift in software development.
People today spend a significant amount of time on tasks that revolve around gathering and processing information. Whether at work or in daily life, much of our energy is dedicated to collecting information from various sources, organizing it, consuming it, and analyzing it to make informed decisions. This process can be broken down into two key components: the first involves gathering, organizing, and consuming information, while the second involves applying cognitive skills to analyze and act on that information.
Around 80% of effort is spent on the first component—essentially the information extraction, transformation, and loading (ETL) process—leaving only 20% for applying critical thinking to drive decisions and actions. This mirrors the world of software development, where ETL processes often consume the majority of resources before algorithms can be applied to generate insights.
We believe that language models are poised to fundamentally disrupt this workflow, enabling professionals to focus more on high-value work rather than getting bogged down by the initial stages of information processing. While traditional data and information tools have provided some assistance, the emergence of LLM-based AI introduces a key differentiator: the way individuals interact with these systems. Instead of users actively seeking out information, LLMs allow information to come directly to them, effortlessly summoned in natural language. This shift transforms the interaction from a laborious search-and-process activity to an intuitive, conversational engagement with technology.
This profound shift is driven by three emerging undercurrents that we believe will fundamentally change how solutions are built for enterprises:
- Cost of Compute ≈ 0, Cost of Intelligence ≈ 0:
Echoing Moore's Law, which predicted exponential growth in computing power at ever-falling cost, the past decade has seen the cost of computing drop dramatically. Now, with the advent of language models, the cost of intelligence is rapidly approaching near-zero levels as well. This shift is making advanced AI capabilities more accessible than ever before.
- Beginning of the End of the SaaS Era
While Software as a Service (SaaS) has been the dominant model for software delivery, its grip is starting to loosen. Historically, installing and administering software were complex, resource-intensive tasks. However, advancements in self-hosting technologies have simplified these processes, making it easier and more cost-effective for enterprises to host and manage their own solutions. Moreover, IT departments are eager to regain control after years of dependence on Big Tech's cloud services.
- LLM-Driven Development: An Evolving Software Development Paradigm
The history of software development has seen numerous paradigm shifts, each bringing new methodologies and approaches. Despite these changes, two core objectives have remained constant: improving efficiency and enhancing user experience. LLM-driven development represents the next step in this evolution, offering massive productivity gains in building solutions and changing the way users interact with systems.
Current Concerns & Constraints
Generative AI, despite its transformative potential, is accompanied by several concerns and constraints that need to be addressed:
- Data Privacy and Security
With generative AI systems often relying on vast amounts of data, there are significant concerns regarding data privacy and security. Enterprises are wary of feeding sensitive business data into AI models that may not guarantee data protection, leading to potential breaches and misuse.
- Ethical Considerations
The ethical implications of using generative AI are profound. Issues such as bias in AI models, the potential for generating misleading or harmful content, and the overall transparency of AI decision-making processes are critical concerns that must be addressed to ensure responsible AI deployment.
- Dependence on Big Tech
Many current generative AI solutions are controlled by a few major tech companies. This oligopoly limits the diversity of AI offerings and forces enterprises to rely on these companies, potentially stifling innovation and increasing vulnerability to corporate agendas and pricing strategies.
- Complexity and Accessibility
The implementation and integration of generative AI systems can be complex and resource-intensive. Small and medium-sized enterprises may find it challenging to adopt these technologies without significant investment in infrastructure and expertise.
- Regulatory and Compliance Issues
As generative AI technologies evolve, so do the regulatory landscapes governing their use. Companies must navigate a complex web of regulations to ensure compliance, which can be a significant constraint, especially in highly regulated industries like healthcare and finance.
- Quality and Reliability of AI Outputs
The quality and reliability of outputs generated by AI models can vary. Ensuring consistent, accurate, and high-quality results is crucial for the practical application of generative AI in business-critical operations.
- Alignment of Public Models
Public generative AI models are often aligned according to the perspectives and worldviews of the big tech companies that develop them. This alignment may not always match the values, needs, or cultural contexts of different enterprises and users, potentially leading to misalignment in goals and outcomes. This constraint emphasizes the need for customizable and transparent AI models that can be adapted to fit specific enterprise requirements and ethical standards.
What's Emerging
With the introduction of ChatGPT in 2022, the power of Generative AI was suddenly demonstrated to the general public. Since then, significant advancements have been made through model improvements and the introduction of various open-source initiatives.
- Rise of Open Source Models:
The adoption of open-source models has seen a tremendous surge, with platforms like Hugging Face paving the way; the release of Hugging Face's Transformers library facilitated the widespread adoption and integration of open models. A major turning point came from Meta, which, under somewhat controversial circumstances, released the weights of its LLaMA family of models. Other major players, such as Mistral and Google, have since followed with open releases of their own.
- Skyrocketing Adoption
The introduction and adoption of open-source models of all sizes have skyrocketed. These models are now more accessible and customizable, enabling enterprises to leverage AI technology without the constraints of proprietary solutions.
- Ease of Fine-Tuning and Inference
Many open-source projects have simplified the processes of fine-tuning and inference. Projects like Ollama have made it possible to deploy small models on personal laptops or desktops. This ease of deployment and customization is further enhanced by various tools that simplify LLM operations (LLM ops), making it easier to manage and optimize AI models; a minimal sketch of querying such a locally hosted model follows this list.
- Focus on Domain-Specific Models
The emergence of open-source models and tools has shifted the focus towards taking foundational models and tailoring them for specific domains and use cases. This approach not only enhances efficiency and effectiveness but also makes the models more cost-effective and relevant to specific business needs.
- Increased Adoption of Owned Models
Due to data privacy concerns and the increasing feasibility of pre-training, fine-tuning, and deploying models for production use, the adoption of owned models is on the rise. Enterprises are increasingly opting to develop and deploy their own models to ensure data privacy, customization, and control over their AI applications.
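To make the point about ease of local deployment concrete, below is a minimal sketch of querying a model served locally by Ollama over its HTTP API. It assumes the Ollama daemon is running on its default port and that a small model has already been pulled; the model name "llama3" and the summarization prompt are placeholders, not a recommendation of any particular checkpoint.

```python
# Minimal sketch: querying a locally hosted model served by Ollama.
# Assumes the Ollama daemon is running on its default port (11434) and that a
# small model (placeholder name "llama3") has already been pulled locally.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def summarize(text: str, model: str = "llama3") -> str:
    """Ask the locally hosted model to summarize a piece of text."""
    payload = {
        "model": model,
        "prompt": f"Summarize the following text in two sentences:\n\n{text}",
        "stream": False,  # return one JSON object instead of a token stream
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["response"]


if __name__ == "__main__":
    print(summarize("Quarterly revenue grew 12% while support tickets fell by a third."))
```

Because the model runs entirely on local hardware, no business data leaves the machine, which is precisely the privacy property behind the owned-model trend described above.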
These emerging trends indicate a significant shift towards more accessible, customizable, and domain-specific AI solutions. The landscape of AI development is becoming more democratized, allowing a broader range of enterprises to harness the power of generative AI.
Disruptions Happening Currently
The rapid advancements in generative AI and the proliferation of open-source models are causing significant disruptions across various industries. These disruptions are transforming the way businesses operate, innovate, and interact with technology.
- Democratization of AI Technology
The availability of powerful open-source AI models is democratizing access to advanced AI capabilities. Small and medium-sized enterprises (SMEs) can now leverage state-of-the-art AI without the need for massive investments in proprietary technologies. This democratization is leveling the playing field and fostering innovation across diverse sectors.
- Shift from Proprietary to Open-Source Models
There is a noticeable shift from reliance on proprietary AI models controlled by big tech companies to open-source alternatives. This shift is empowering organizations to customize AI solutions to fit their unique needs and reducing dependency on a few dominant players in the AI space.
- Enhanced Customization and Flexibility
Open-source models and tools have made it easier to fine-tune and adapt AI models for specific applications. This enhanced customization and flexibility allow businesses to create highly tailored solutions that better meet their operational requirements and customer needs; a brief fine-tuning sketch is shown after this list.
- Improved AI Operations (LLM Ops)
The emergence of tools and frameworks that simplify the management, deployment, and optimization of large language models (LLMs) is revolutionizing AI operations (LLM ops). These advancements are reducing the complexity and cost of maintaining AI systems, making them more accessible and manageable for businesses of all sizes.
- Increased Focus on Data Privacy and Security
As organizations become more aware of data privacy and security issues, there is a growing trend towards developing and deploying owned AI models. This shift helps businesses maintain greater control over their data and ensures compliance with regulatory requirements.
- Beginning of the End of the SaaS Era
The dominance of Software as a Service (SaaS) is being challenged. Historically, SaaS was favored for its simplicity in installation and administration. However, advancements in self-hosting technologies are making it easier and more cost-effective for enterprises to host and manage their own solutions, and IT departments are eager to regain control after years of dependence on Big Tech's cloud services. This shift is leading to a resurgence in on-premise solutions and hybrid models that combine the best of both worlds.
- Acceleration of AI-Driven Innovation
The disruptions caused by generative AI are accelerating innovation across various industries. Companies are exploring new ways to integrate AI into their products and services, leading to the development of novel solutions and business models.
- Transformation of User Interactions
Generative AI is fundamentally changing the way users interact with software systems. AI-driven interfaces are becoming more intuitive, responsive, and capable of understanding complex queries and providing personalized responses. This transformation is enhancing user experiences and setting new standards for interaction quality.
- Economic and Workforce Impacts
The widespread adoption of generative AI is having significant economic and workforce impacts. While AI is driving productivity gains and creating new opportunities, it is also leading to changes in job roles and the skills required in the workforce. Businesses must navigate these changes to ensure a smooth transition and harness the full potential of AI.
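To make the customization point above concrete, here is a minimal, illustrative sketch of adapting an open-weight model to a specific domain with parameter-efficient fine-tuning (LoRA) using the Hugging Face transformers and peft libraries. The model name is a placeholder, the target module names depend on the chosen architecture, and the actual training loop on domain data is deliberately omitted.

```python
# Minimal sketch: attaching LoRA adapters to an open-weight model so it can be
# fine-tuned on domain-specific data. Model and module names are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "your-org/small-open-model"  # placeholder for an open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains small low-rank adapter matrices instead of the full weight set,
# which keeps domain customization cheap enough for modest hardware.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor applied to the adapters
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumption: module names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically a small fraction of total parameters

# From here, the adapted model would be trained on domain-specific text with a
# standard transformers Trainer and then served alongside the frozen base weights.
```

The design point is that only the small adapter weights are trained and stored per domain, which is what makes the domain-specific, owned-model approach described above economically viable.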
How We Are Going to Be Part of This New Emerging Paradigm
As we navigate this transformative era of generative AI, we are committed to positioning ourselves at the forefront of this new paradigm. Our strategic initiatives will focus on helping enterprises leverage generative AI while addressing key concerns such as data privacy, cost efficiency, and operational innovation.
- Privacy-First Full Stack Implementation and Deployment Services
We will assist enterprises in adopting generative AI within their organizations with a strong emphasis on privacy. Our services will cover the entire spectrum from training and fine-tuning models for specific use cases to deploying these models and building applications on top of them. By ensuring data privacy and security, we aim to foster trust and confidence in AI adoption.
- Cost-Effective Development to Disrupt Existing Solutions
As the cost of intelligence continues to decline with advancements in LLM capabilities, we will harness these efficiencies to improve our development workflows. By building and deploying AI solutions quickly and cost-effectively, we aim to disrupt existing market solutions. Our offerings will include one-time cost models that allow customers to own their solutions outright, freeing them from recurring SaaS subscriptions and addressing data privacy concerns.
- LLM-Driven Development Approach to Business Operational Solutions
We will introduce a fresh, LLM-driven approach to existing business operational solutions such as CRM, ERP, and e-commerce systems. By integrating generative AI into these platforms, we will enhance their capabilities, making them more intuitive, efficient, and responsive to user needs.
- Developing Small Language Models
In addition to leveraging existing open-source models, we will create our own stack of small language models tailored for very specific use cases, domains, and operational tasks. These bespoke models will be optimized for performance and accuracy in their respective areas, providing our clients with highly specialized and effective AI solutions.
By focusing on these strategic initiatives, we aim to not only lead in the generative AI space but also empower enterprises to harness the full potential of AI, drive innovation, and achieve their business objectives with greater efficiency and effectiveness.
Our Motivation
Our motivation to embrace and lead Gen AI-first development stems from several core beliefs and objectives:
- Saving Enterprise Resources on Mundane Tasks
We aim to help enterprises redirect their resources from mundane, repetitive tasks to more strategic and creative work. By automating routine processes with AI, we make businesses more efficient and focused on high-value activities.
- Enhancing Data Privacy and Security
In an era where data privacy and security are paramount, we are committed to providing solutions that prioritize these concerns. By developing and deploying privacy-first AI models, we aim to build trust and ensure that our clients can leverage AI without compromising their sensitive data.
- Reducing Dependency on Big Tech
The current landscape is heavily dominated by a few major tech companies. We want to disrupt this oligopoly by offering open-source and proprietary AI solutions that provide enterprises with greater control and customization. Our goal is to democratize AI technology and reduce dependency on big tech.
- Cost Efficiency and Accessibility
We recognize the need for cost-effective AI solutions that are accessible to businesses of all sizes. By leveraging the declining cost of intelligence, we aim to provide affordable AI solutions that deliver high value without recurring costs, making advanced AI technology accessible to a broader range of enterprises.
- Driving Operational Excellence
Our goal is to revolutionize existing business operational solutions such as CRM, ERP, and e-commerce systems with LLM-driven approaches. By enhancing these platforms with generative AI, we aim to improve their functionality, efficiency, and user experience, driving operational excellence for our clients.
By focusing on these objectives, we aim to lead the generative AI revolution with a clear vision and purpose. Our commitment to innovation, privacy, cost efficiency, and ethical AI will drive our efforts to make a meaningful difference in the world of technology and beyond.
To Conclude
The adoption of digital technology has been transforming industries for decades. However, we are now witnessing a significant shift in which humans spend most of their time and energy on gathering and processing information. In technical terms, 80-85% of their time is consumed by the ETL (extract, transform, and load) process, with the remaining time spent applying intelligence to the data to make decisions or take actions.
Generative AI is set to disrupt this paradigm by automating up to 95% of this process. As models become more advanced, intelligent, and accessible, they will handle the extraction, transformation, and initial analysis of data, allowing humans to focus on creative and impactful work. Our goal is to free up human time and energy from mundane tasks, enabling more innovative and meaningful contributions at lower time and cost, while maintaining a strong emphasis on data privacy.
Reach out to us to discover our roadmap.