
Practical Android AI

First Edition · Android 13 · Kotlin 2.0 · Android Studio Otter

9. Best Practices, Ethics, and the Future of Android AI
Written by Zahidur Rahman Faisal


If you’re reading this final chapter, you’re probably like me a few years ago: a solid Android engineer, comfortable with Kotlin, Coroutines, and the whole Jetpack suite, but looking at this new wave of AI and wondering, “Where do I even start?” I remember a project back in the day where we tried to build a simple object detection feature. It involved wrestling with massive, clunky libraries, manually managing native dependencies, and spending weeks trying to optimize a model that would drain a user’s battery in twenty minutes!

Fast-forward to today, and AI is no longer a niche, specialist-only field; it’s a fundamental part of the modern developer’s toolkit, reshaping how users interact with their apps and opening up entirely new possibilities for creating intelligent, personalized experiences. The world of AI has moved from struggling with basic classification to on-device generative AI that can summarize text, generate images, and even help us write our own code.

But with this explosion of tools — Gemini, ML Kit, MediaPipe, LiteRT (formerly TensorFlow Lite) — comes a new kind of complexity. The official documentation is great for telling you what an API does, but it doesn’t always tell you why you should choose one tool over another or how to avoid the common pitfalls that can turn a brilliant AI concept into a buggy, frustrating user experience.

That’s the goal of this book: not just a rehash of the docs, but the collection of hard-won lessons, best practices, and strategic frameworks I’ve learned over years of shipping AI features to millions of users. These are the lessons I wish I’d had when I was starting out.

This chapter covers the three crucial stages of building with AI on Android:

  1. The Big Decision: Start with the single most important architectural question you’ll face: Should your AI run on the user’s device or in the cloud? This choice impacts everything that follows.

  2. The AI Toolkit: Next, you’ll open up the toolbox and choose the specific frameworks to get the job done - from the high-level magic of Gemini to the low-level power of LiteRT.

  3. Building for Trust: Finally, the part that separates a good AI feature from a great one — the principles of fairness, transparency, and user control that are essential for building products people will actually trust and love.

The Big Decision: Where Does the “Thinking” Happen?

Before you write a single line of AI-specific code, before you even think about which model to use, you have to answer one fundamental architectural question:

“Where will the AI model perform its inference?”

Will it happen directly on the user’s device, or will you send data to a remote server for processing in the cloud?

This isn’t a minor implementation detail. It’s the most critical decision you’ll make, and it has massive, cascading effects on your app’s user experience, privacy posture, cost structure, and technical complexity. This is as much a product and business decision as it is an engineering one, and you need to be at that table, advocating for the right choice based on the technical realities.

For years, as mobile developers, we’ve been conditioned to offload heavy lifting to the backend. Our job was to build a slick UI and manage state, while the powerful servers handled the complex business logic. The rise of powerful on-device AI turns that model on its head. It represents a genuine paradigm shift for us. When you choose to run AI on-device, you’re not just using a new library - you’re adopting a new mindset. Suddenly, you have to think like an embedded-systems engineer again.

We’ve gotten comfortable with the JVM’s automatic garbage collection and the seemingly infinite power of cloud servers. On-device AI forces us back to first principles. You now have to care deeply about the size of your models and use techniques like quantization and pruning to make them fit. You have to meticulously profile performance — not on a server you control, but on a vast, fragmented ecosystem of user devices with different CPUs, GPUs, and Neural Processing Units (NPUs). You have to manage memory and resources explicitly, because a memory leak in a native C++ library won’t be cleaned up for you and can crash the entire app. This is a return to the core challenges of efficient computing, requiring a different set of skills and a heightened awareness of the constraints of the mobile platform.
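The explicit resource management mentioned above can be sketched in Kotlin. `NativeModel` here is a hypothetical stand-in for a wrapper around a native inference library (such as a LiteRT interpreter); the point is the pattern, not the API: native memory isn’t garbage-collected, so you release it deterministically with `Closeable` and `use`.

```kotlin
// A minimal sketch of explicit lifecycle management for a native-backed
// model. `NativeModel` is hypothetical — a placeholder for any wrapper
// whose native buffers the JVM garbage collector cannot reclaim.
import java.io.Closeable

class NativeModel(val name: String) : Closeable {
    var isClosed = false
        private set

    fun infer(input: FloatArray): FloatArray {
        check(!isClosed) { "Model $name has been released" }
        // Placeholder for a real native inference call.
        return FloatArray(input.size) { input[it] * 2f }
    }

    override fun close() {
        // A real wrapper would free native buffers here.
        isClosed = true
    }
}

fun runOnce(input: FloatArray): FloatArray =
    // `use` guarantees close() runs even if inference throws,
    // preventing the native memory leaks described above.
    NativeModel("demo").use { model -> model.infer(input) }
```

Holding a long-lived model instance and closing it in your lifecycle callbacks works too; the non-negotiable part is that *someone* calls `close()`.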

Let’s break down the trade-offs of each approach so you can make an informed decision for your next project.

On-Device AI: The Pros and Cons of Local Intelligence

Running AI models directly on the user’s phone is the direction the industry is heading for a wide range of use cases — and for good reason. ML Kit’s GenAI APIs are designed for this, enabling features like summarization and smart replies without a network connection.

The Wins

The Trade-offs You Accept

Cloud AI: When You Need the Heavy Artillery

Despite the powerful trend toward on-device processing, the cloud still has a critical role to play, especially when you need raw, unadulterated power.

Why You’d Choose It

The Trade-offs

The Pragmatic Engineer’s Choice: The Hybrid Approach

After looking at these pros and cons, you might realize that for many sophisticated applications, the answer isn’t a strict “either/or.” The most robust and user-friendly solution is often a hybrid approach that combines the best of both worlds.  
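One way to structure that hybrid approach in code is an on-device-first dispatcher that escalates to the cloud only when the local path can’t handle the request. Everything below is an illustrative sketch — `Engine`, `OnDeviceEngine`, and `CloudEngine` are hypothetical placeholders, not real ML Kit or Gemini APIs:

```kotlin
// Sketch of a hybrid dispatch strategy: prefer the private, free,
// offline on-device path; fall back to the cloud when the request
// exceeds what the local model can handle. All types are hypothetical.
interface Engine {
    fun summarize(text: String): String? // null = can't handle this input
}

class OnDeviceEngine(private val maxChars: Int = 4_000) : Engine {
    override fun summarize(text: String): String? =
        if (text.length > maxChars) null        // too large for the local model
        else "on-device:${text.take(20)}"       // placeholder for local inference
}

class CloudEngine : Engine {
    override fun summarize(text: String): String =
        "cloud:${text.take(20)}"                // placeholder for a network call
}

class HybridSummarizer(
    private val local: Engine = OnDeviceEngine(),
    private val remote: CloudEngine = CloudEngine(),
) {
    fun summarize(text: String): String =
        local.summarize(text) ?: remote.summarize(text)
}
```

In a real app the fallback criteria would also include device capability checks and network state, and the cloud path would be gated on user consent, as discussed later in this chapter.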


Android AI Toolkit

Alright, you’ve made the big architectural decision about where the AI will run. Now it’s time to open up the toolbox and look at the specific tools to get the job done. The Android AI ecosystem is rich and varied, but it can also be confusing. The key is the “right tool for the job” philosophy. Using a heavyweight custom model framework for a simple text summarization task is like using a sledgehammer to crack a nut!


AI-powered Programming: Gemini in Android Studio

Android Studio is the tool that will help you build everything else. Gemini in Android Studio is your AI-powered pair programmer. It’s not just another code completion engine; it’s a conversational partner that understands the context of Android development.

Mastering Prompts: Getting What You Want Done

Whether you’re using Gemini in Android Studio or calling the API from your app, the quality of your output is directly proportional to the quality of your input, or “prompt.” Prompt design is a skill, but it’s one you can learn.

Be Hyper-Specific with Your Prompts

This is the golden rule. A vague question gets a vague answer. Instead of asking, “How do I use the camera?” ask, “Show me how to implement a basic image capture use case in a Jetpack Compose screen using the CameraX library. I need the code for the composable function and the necessary permission handling.” The more context you provide, the better your results will be.

Define the Structure and the Output

Don’t just throw a long, unstructured block of text at the model; use clear, specific instructions. Add the context the model needs to solve the problem effectively. Use prefixes like Input: and Output: or formatting like XML tags to clearly separate the different parts of your prompt. This helps the model understand the task and the desired format.
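Putting those conventions together, a structured prompt might be assembled like this. The tag names and section layout are illustrative conventions, not a required API:

```kotlin
// Sketch: building a structured prompt with explicit sections.
// The <task>/<constraints> tags and Input:/Output: prefixes are
// conventions that help the model separate instructions from data.
fun buildReviewPrompt(code: String): String = """
    You are reviewing Kotlin code for an Android app.

    <task>
    Identify potential memory leaks and suggest fixes.
    </task>

    <constraints>
    - Answer in at most three bullet points.
    - Quote the offending line before each suggestion.
    </constraints>

    Input:
    $code

    Output:
""".trimIndent()
```

The same template works whether the string is typed into the Gemini chat window in Android Studio or sent programmatically from your app.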

Break Down Complex Problems

Don’t try to solve a complex, multi-step problem in a single prompt. Break the problem down into a sequence of simpler tasks. Make the output of the first prompt the input for the second, and so on.
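The chaining idea can be sketched as a small pipeline where each step’s output feeds the next. `generate` here is a hypothetical stand-in for a call to any text-generation API:

```kotlin
// Sketch of prompt chaining. `generate` is a hypothetical placeholder
// for a real text-generation call; it just echoes its prompt here.
fun generate(prompt: String): String =
    "model-output-for(${prompt.take(30)})"

fun summarizeArticle(article: String): String {
    // Step 1: a simple, focused task — extract the key facts only.
    val facts = generate("List the key facts in this article:\n$article")
    // Step 2: turn those facts into prose. Two easy prompts usually
    // beat one "read, extract, and summarize" mega-prompt.
    return generate("Write a two-sentence summary of these facts:\n$facts")
}
```

Each step is easier to prompt, easier to test, and easier to debug in isolation than one monolithic request.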

Building AI That People Actually Trust

Now that you know the architecture and the tools, you can build a technically functional AI feature — but the job isn’t done. Technical implementation is only half the battle. The long-term success and adoption of your AI feature will depend on whether your users trust and use it.

Designing for Fairness: How to Avoid Building Biased Bots

First, let’s define “fairness” in a practical way that we, as engineers, can work with. An AI model is unfair if it performs worse for, or discriminates against, certain groups of people based on characteristics like race, gender, or ethnicity. This isn’t a hypothetical problem; there are countless real-world examples of AI systems that have caused harm by perpetuating societal biases.
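That definition suggests something concrete to measure: how your model performs for each group, not just on average. A minimal sketch of such a check, with hypothetical types and a simple accuracy metric, might look like this:

```kotlin
// Sketch: a minimal fairness check — compare the model's accuracy
// across user groups. A large gap between groups is a red flag
// worth investigating before shipping. Types are illustrative.
data class Example(val group: String, val predicted: Boolean, val actual: Boolean)

fun accuracyByGroup(results: List<Example>): Map<String, Double> =
    results.groupBy { it.group }
        .mapValues { (_, xs) ->
            xs.count { it.predicted == it.actual }.toDouble() / xs.size
        }

fun maxAccuracyGap(results: List<Example>): Double {
    val acc = accuracyByGroup(results).values
    return (acc.maxOrNull() ?: 0.0) - (acc.minOrNull() ?: 0.0)
}
```

Real fairness evaluation involves more nuanced metrics than raw accuracy, but even this simple per-group breakdown catches problems that a single aggregate number hides.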

Putting Users in Control: The Non-Negotiable Settings

Giving users clear, accessible controls is a fundamental requirement for building an ethical and trustworthy application. For AI-powered apps, the essential rule of thumb is that the user must be in control of their own experience and their own data.
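In code, that principle starts with modeling the controls explicitly and gating every AI call on them. This is a hedged sketch — the setting names are illustrative, and persistence (for example, via DataStore) is omitted:

```kotlin
// Sketch: user-facing AI controls as an explicit settings model.
// Names are hypothetical; note that data-sharing options default
// to OFF — cloud processing is opt-in, not opt-out.
data class AiSettings(
    val aiFeaturesEnabled: Boolean = true,
    val cloudProcessingAllowed: Boolean = false,  // opt-in
    val shareDataForImprovement: Boolean = false, // off by default
)

// Gate every AI call on the user's choices, not the app's convenience.
fun canUseCloud(s: AiSettings): Boolean =
    s.aiFeaturesEnabled && s.cloudProcessingAllowed
```

Wiring these settings into the hybrid dispatch decision — on-device stays available, cloud requires consent — keeps the user in control without silently degrading the feature.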

Conclusion

If you’ve made it to the end of this chapter, then you already understand something many developers never quite grasp: building AI features on Android isn’t just about gluing a model onto an app. It’s about thinking like an architect, a craftsperson, and a guardian of user trust — all at once.

Have a technical question? Want to report a bug? You can ask questions and report bugs to the book authors in our official book forum here.
© 2026 Kodeco Inc.
