Kuvonchbek Kucharov


In today’s world, artificial intelligence (AI) is becoming increasingly important. Generative AI has transformed how images, videos, and even text are created. AI is also used ever more frequently in healthcare, education, finance, and industry to enhance productivity and improve decision-making. Leading countries now rely on AI in their daily operations, integrating algorithmic decision-making into everyday tasks, and many businesses are incorporating AI into their strategies as a result.

This is also true for Uzbekistan, where AI technology is emerging in several sectors, including public administration, digital services, and e-commerce. For example, the country employs AI in the legal sphere through platforms such as lex.ai.uz and is digitizing public services with AI. Notably, ChatGPT, one of the best-known generative AI applications, is the most downloaded application in Uzbekistan in 2025.

However, Uzbekistan has yet to implement AI regulation on any wide scale. Focused policies are needed to tackle AI risks across different industries. Uzbekistan faces particular challenges in regulating AI: the absence of a clear vision, the lack of liability frameworks for AI developers and users, and the need for robust ethical standards. At the same time, the government is actively working to accelerate AI adoption, with over 20 AI-driven projects launched. Prime Minister Aripov, for example, has highlighted key concerns such as legal protection, data transparency, and security, emphasizing the need for coordinated action to ensure safe AI development.

The need for regulation

There is a broad, underlying concern that technology can deliver tremendous benefits to humanity and advance civilization while also opening new opportunities for misuse. As we develop AI systems, therefore, they must respect human rights and privacy, with strong safeguards to mitigate potentially harmful outcomes. Key human rights concerns include the right to privacy, as AI-driven surveillance systems enable mass data collection, and the right to non-discrimination, given that biased algorithms can reinforce societal inequalities. AI-powered misinformation also threatens freedom of expression by distorting public discourse. Some, however, believe excessive regulation could hinder AI development and disadvantage less developed nations in the so-called “AI race.” Developed nations, with their rich digital infrastructure, resources, and advanced data technology, may be reluctant to embrace such regulations for fear of losing their competitive edge.

Nonetheless, failing to implement some form of AI regulation could harm society in the long run, especially given that language models have shown a tendency to produce biased outputs. One major concern is that AI systems could guide users in creating biological weapons, with potentially catastrophic consequences. A 2022 study by North Carolina-based researchers demonstrated that an AI model, when prompted, could generate blueprints for dangerous biochemical compounds, raising alarms about the misuse of generative AI in bioterrorism. Another concern is that these systems could become so sophisticated that they manipulate users, including independent operators, complicating safety evaluations of such technologies. Social media algorithms, for instance, have been accused of radicalizing individuals by reinforcing extremist content, as seen in multiple cases linked to online radicalization.

Additionally, deploying AI in social roles such as policing carries a significant risk of severe injustice, including wrongful imprisonment or even fatal outcomes. Studies have shown that predictive policing algorithms disproportionately target marginalized communities, reinforcing systemic biases. In the U.S., for example, AI-driven tools used in sentencing have been criticized for unfairly assigning higher recidivism scores to Black defendants, deepening racial disparities. The development and application of AI systems must therefore be thoroughly evaluated, and stricter policies implemented, to avert societal catastrophe caused by unintended effects.

The difficulty of AI governance

Regulating AI is a challenging endeavor, and the history of legislating for digital technologies has largely been one of failure. For instance, the European Union’s “Right to Be Forgotten” under the General Data Protection Regulation (GDPR) has faced significant implementation challenges, as global tech companies struggle to balance privacy rights with freedom of information. Similarly, the U.S. has struggled to regulate online platforms effectively, as seen in the failure of the Stop Online Piracy Act (SOPA) in 2012 amid backlash over free speech concerns. Policymakers must decide what should be regulated (material scope), who should be regulated (personal scope), where the regulation should apply (territorial scope), and when it should be enforced (temporal scope). Defining the material scope of frontier AI regulations is especially difficult: regulations must cover all AI systems that pose significant risks (otherwise they are under-inclusive) while avoiding coverage of systems that do not (otherwise they are over-inclusive).

AI is often described as a black box, making it difficult to understand how it reaches decisions. To open this box, representatives from academia, government, and civil society should be granted access to these systems to promote transparency. Such access might enable lawmakers to establish some form of regulation, but even that may not help in the long term: the rapid pace of AI development means any legislation risks becoming obsolete by the time it is implemented. Moreover, the complexity of AI further complicates regulation, since AI used in law enforcement requires different specialized policies than AI employed for social scoring. Even well-considered regulations that overcome all these challenges carry the risk of stifling AI innovation. The difficulty of defining AI also results in a lack of standardized rules, which could lead to unintended consequences for society. The GDPR, enacted in 2018, aimed to provide comprehensive data protection and privacy for individuals within the EU; despite its thorough framework, the rapid evolution of technology and data practices has arguably left it out of date.

Similarly, the California Consumer Privacy Act (CCPA) of 2020 sought to give consumers more control over their data, but it too struggles to keep pace with technological advancement. These examples illustrate how even well-intentioned regulations can lose effectiveness in the face of rapid technological change, and they underline the complexity of defining and regulating AI as the technology evolves at an unprecedented pace. Some therefore argue that we cannot regulate what we do not understand, making AI unsuited to regulation for now. In short, the diverse applications and rapid evolution of AI make a one-size-fits-all regulatory framework exceedingly difficult, necessitating context-specific policies to address the unique risks and impacts of different AI systems.

The recommended approach for Uzbekistan regarding AI regulation

Uzbekistan should create a principle-based regulatory framework tailored to its socio-economic context to harness AI’s transformative potential while mitigating its risks. This approach avoids rigid, prescriptive rules in favor of overarching principles, such as requiring that AI systems operate safely, securely, and ethically. The central idea of principle-based regulation is that policymakers set high-level principles rather than mandating specific behaviors; regulated entities adopt these principles and determine how to implement them. For example, drivers might be required to drive at a reasonable speed rather than adhere to a specific speed limit, and a potential AI principle could be that AI systems must be safe and secure. Compliance is then assessed case by case, by evaluating whether the regulated behavior aligns with the principles. Japan’s AI strategy, guided by the OECD AI Principles, illustrates this approach: it emphasizes innovation and trustworthiness while respecting human rights and democratic values, establishing high-level principles for AI development and use while allowing flexibility in how they are applied across sectors. This has enabled Japan to harness AI’s benefits while addressing its risks in a context-specific manner.

To facilitate this, Uzbekistan should establish an agency charged with verifying that regulated entities’ practices align with the high-level principles. Principle-based regulation has gained traction across various regulatory domains; a typical example is vehicle crash testing, where manufacturers must meet broad safety objectives rather than follow prescribed testing procedures. This allows industries to innovate and adapt to new challenges without being constrained by specific rules.

Another example is UK financial services law, under which firms must demonstrate that fair customer treatment is integral to their business model. To operationalize this approach, Uzbekistan should develop specific principles that companies must follow. This would allow AI to keep developing in Uzbekistan without stifling innovation, while existing laws regulate potentially harmful uses of AI, particularly liability for its wrongful application.

Learning hard lessons from its ineffective, innovation-stifling regulation of cryptocurrency, Uzbekistan must now avoid outright bans on AI. Instead, it should implement flexible, sector-specific guidelines and invest in digital literacy and cybersecurity infrastructure, positioning the country to reap the benefits of AI-driven productivity gains. By prioritizing outcome-focused principles over rigid rules, Uzbekistan can encourage innovation while maintaining public trust and ensuring that AI serves as an equitable development tool.

Furthermore, the debate over whether to enact new AI-specific legislation or rely on existing laws remains on lawmakers’ tables. While comprehensive new regulation ensures full coverage, applying existing frameworks is easier to implement. Many jurisdictions, such as the U.S. state of Colorado with its AI Act, adopt a hybrid approach, introducing AI-specific rules while leveraging existing laws. Uzbekistan should follow suit, integrating AI governance into its broader digital transformation strategy for consistency and efficiency. India, for example, has amended its data protection laws to include AI provisions requiring transparency in data usage for AI training and deployment. Brazil’s LGPD now requires safeguards for AI-driven data processing, with added clauses on algorithmic transparency and bias mitigation. Colorado’s AI Act, a prime example of targeted AI regulation, prohibits algorithmic discrimination in high-risk AI systems (e.g., hiring, housing) by requiring audits and fairness assessments. Consumer protection laws can likewise be amended to mandate disclosure of AI use and hold developers liable for biased outputs, as seen in Utah’s AI Amendments.

Conclusion

AI’s rapid growth in areas such as task automation and AI-assisted decision-making is inevitably driving societies to evolve. Frameworks and policies are essential to govern AI and to strategize its use in Uzbekistan effectively. However, rigid regulations may prove counterproductive given AI’s inherent complexity. As a remedy to these regulatory challenges, principle-based policies that can be updated as AI changes may prove effective in the long term. A starting point could be a policy requiring AI developers and users to ensure their products and processes are ‘working safely, securely, and ethically.’ Principle-based policies are preferable because they offer flexibility and adaptability: unlike rigid regulations, they can be tailored to specific contexts and updated as technology advances, making them better suited to AI’s dynamic nature. This also means using both existing and new laws to regulate AI. Public education about AI is equally important, so that people can understand and trust the technology. If Uzbekistan manages to enact sound AI rules, AI can be used to improve the country while minimizing its risks; such rules will protect people and help AI development flourish in Uzbekistan.

Cite as: Kuvonchbek Kucharov, “Regulating AI in Uzbekistan: Balancing Innovation and Risk”, Uzbekistan Law Blog, 05.05.2025.