Dialogflow Mobile Development Integration: Android and Flutter Complete Tutorial

Want to add AI conversation features to your App?
Whether it's a customer service assistant, voice assistant, or interactive guide, Dialogflow can help you implement it. This article shows you how to integrate Dialogflow into Android and Flutter apps to build mobile applications with natural language understanding.
If you're not yet familiar with Dialogflow, we recommend first reading the Dialogflow Complete Guide.
Mobile App Integration Method Comparison
There are two main ways to integrate Dialogflow into an App, each with pros and cons.
Direct API Call
App directly calls Dialogflow API:
App → Dialogflow API → Response
Advantages:
- Simple architecture
- Lower latency
- No need to build backend
Disadvantages:
- API key exposed in App (security risk)
- Cannot add additional business logic
- Difficult to record conversation history
Backend Proxy
App calls Dialogflow through your backend:
App → Your Backend → Dialogflow API → Your Backend → App
Advantages:
- API key safely stored in backend
- Can add business logic (validation, logging, filtering)
- Easy to integrate with other systems
Disadvantages:
- Need to build and maintain backend
- Slightly higher latency
- More complex architecture
Analysis and Selection Recommendations
| Method | Suitable Scenarios |
|---|---|
| Direct API | POC validation, learning purposes, internal tools |
| Backend Proxy | Production products, security needs, enterprise applications |
Recommendation: Production Apps should always use the backend proxy method.
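Whichever method you choose, it helps to pin down the app-to-backend contract early. Below is a minimal sketch of what the app side of a backend proxy would send; the `/api/chat` endpoint and `ChatRequest` shape are illustrative assumptions, not part of any Dialogflow SDK:

```kotlin
// Sketch of the app-side contract for a backend proxy, assuming a
// hypothetical POST /api/chat endpoint exposed by your server.
// The app never holds Dialogflow credentials; it sends only the user
// text and a session ID, and the backend attaches the service account.

data class ChatRequest(val sessionId: String, val text: String)

// Minimal JSON serialization (a real app would use kotlinx.serialization
// or Moshi); escapes quotes and backslashes only, for illustration.
fun ChatRequest.toJson(): String {
    fun esc(s: String) = s.replace("\\", "\\\\").replace("\"", "\\\"")
    return """{"sessionId":"${esc(sessionId)}","text":"${esc(text)}"}"""
}

fun main() {
    val body = ChatRequest(sessionId = "abc-123", text = "What are your hours?").toJson()
    println(body)
    // The app would POST this body to your backend with the user's
    // auth token in the Authorization header.
}
```

The backend validates the user's token, calls `detectIntent` with its own service account, and returns only the fulfillment text to the app.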
Android Studio Integration
Project Setup
Step 1: Add Dependencies
In app/build.gradle add:
```groovy
dependencies {
    implementation 'com.google.cloud:google-cloud-dialogflow:4.0.0'
    implementation 'io.grpc:grpc-okhttp:1.56.1'
    implementation 'com.google.auth:google-auth-library-oauth2-http:1.19.0'
}
```
Step 2: Configure Network Permissions
In AndroidManifest.xml add:
```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```
Gradle Dependencies
Complete build.gradle configuration:
```groovy
android {
    compileSdk 34

    defaultConfig {
        minSdk 24
        targetSdk 34
    }

    packagingOptions {
        exclude 'META-INF/INDEX.LIST'
        exclude 'META-INF/DEPENDENCIES'
    }
}

dependencies {
    implementation 'com.google.cloud:google-cloud-dialogflow:4.0.0'
    implementation 'io.grpc:grpc-okhttp:1.56.1'
    implementation 'io.grpc:grpc-stub:1.56.1'
}
```
Service Account Setup
Step 1: Get Key File
- Go to Google Cloud Console > IAM > Service Accounts
- Create or select service account
- Download JSON key file
Step 2: Place Key (Development Use)
Place key file at app/src/main/res/raw/credentials.json
Note: This is only suitable for development testing. Use backend proxy for production.
DetectIntent API Call
Create Dialogflow client class:
```kotlin
import android.content.Context
import com.google.auth.oauth2.GoogleCredentials
import com.google.cloud.dialogflow.v2.*
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import java.util.UUID

class DialogflowClient(context: Context) {
    private val sessionsClient: SessionsClient
    private val session: SessionName
    private val projectId = "your-project-id"
    private val sessionId = UUID.randomUUID().toString()

    init {
        // Load credentials (development only -- see the note above)
        val stream = context.resources.openRawResource(R.raw.credentials)
        val credentials = GoogleCredentials.fromStream(stream)
            .createScoped(listOf("https://www.googleapis.com/auth/cloud-platform"))
        val settings = SessionsSettings.newBuilder()
            .setCredentialsProvider { credentials }
            .build()
        sessionsClient = SessionsClient.create(settings)
        session = SessionName.of(projectId, sessionId)
    }

    suspend fun detectIntent(text: String): String {
        return withContext(Dispatchers.IO) {
            val textInput = TextInput.newBuilder()
                .setText(text)
                .setLanguageCode("en-US")
                .build()
            val queryInput = QueryInput.newBuilder()
                .setText(textInput)
                .build()
            val request = DetectIntentRequest.newBuilder()
                .setSession(session.toString())
                .setQueryInput(queryInput)
                .build()
            val response = sessionsClient.detectIntent(request)
            response.queryResult.fulfillmentText
        }
    }

    fun close() {
        sessionsClient.close()
    }
}
```
Example Code
Complete Activity example:
```kotlin
class ChatActivity : AppCompatActivity() {
    private lateinit var dialogflowClient: DialogflowClient
    private lateinit var messageAdapter: MessageAdapter
    private val messages = mutableListOf<Message>()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_chat)

        dialogflowClient = DialogflowClient(this)
        messageAdapter = MessageAdapter(messages)
        recyclerView.adapter = messageAdapter

        sendButton.setOnClickListener {
            val text = inputEditText.text.toString()
            if (text.isNotEmpty()) {
                sendMessage(text)
                inputEditText.text.clear()
            }
        }
    }

    private fun sendMessage(text: String) {
        // Display user message
        messages.add(Message(text, isUser = true))
        messageAdapter.notifyItemInserted(messages.size - 1)

        // Call Dialogflow
        lifecycleScope.launch {
            try {
                val response = dialogflowClient.detectIntent(text)
                messages.add(Message(response, isUser = false))
                messageAdapter.notifyItemInserted(messages.size - 1)
                recyclerView.scrollToPosition(messages.size - 1)
            } catch (e: Exception) {
                messages.add(Message("Sorry, an error occurred", isUser = false))
                messageAdapter.notifyItemInserted(messages.size - 1)
            }
        }
    }

    override fun onDestroy() {
        super.onDestroy()
        dialogflowClient.close()
    }
}
```
Flutter Integration
Package Selection
Flutter has several Dialogflow-related packages:
| Package | Description | Maintenance Status |
|---|---|---|
| dialogflow_grpc | Official gRPC protocol | Active |
| flutter_dialogflow | Community package | Less updated |
| dialogflow_flutter | Simplified version | Less updated |
Recommendation: Use dialogflow_grpc or directly use HTTP API.
Cross-Platform Implementation
Step 1: Add Dependencies
In pubspec.yaml:
```yaml
dependencies:
  flutter:
    sdk: flutter
  http: ^1.1.0
  uuid: ^4.0.0
```
Step 2: Create API Service
```dart
import 'dart:convert';
import 'package:http/http.dart' as http;
import 'package:uuid/uuid.dart';

class DialogflowService {
  final String projectId;
  final String accessToken; // Get from backend
  final String sessionId = Uuid().v4();

  DialogflowService({required this.projectId, required this.accessToken});

  Future<String> detectIntent(String text) async {
    final url = Uri.parse(
      'https://dialogflow.googleapis.com/v2/projects/$projectId/agent/sessions/$sessionId:detectIntent'
    );
    final response = await http.post(
      url,
      headers: {
        'Authorization': 'Bearer $accessToken',
        'Content-Type': 'application/json',
      },
      body: jsonEncode({
        'queryInput': {
          'text': {
            'text': text,
            'languageCode': 'en-US',
          },
        },
      }),
    );
    if (response.statusCode == 200) {
      final data = jsonDecode(response.body);
      return data['queryResult']['fulfillmentText'];
    } else {
      throw Exception('Dialogflow API error: ${response.statusCode}');
    }
  }
}
```
Example Widget
```dart
class ChatScreen extends StatefulWidget {
  @override
  _ChatScreenState createState() => _ChatScreenState();
}

class _ChatScreenState extends State<ChatScreen> {
  final TextEditingController _controller = TextEditingController();
  final List<ChatMessage> _messages = [];
  late DialogflowService _dialogflow;
  bool _isLoading = false;

  @override
  void initState() {
    super.initState();
    _dialogflow = DialogflowService(
      projectId: 'your-project-id',
      accessToken: 'your-access-token', // In practice, fetch this from your backend
    );
  }

  void _sendMessage() async {
    final text = _controller.text.trim();
    if (text.isEmpty) return;

    setState(() {
      _messages.add(ChatMessage(text: text, isUser: true));
      _isLoading = true;
    });
    _controller.clear();

    try {
      final response = await _dialogflow.detectIntent(text);
      setState(() {
        _messages.add(ChatMessage(text: response, isUser: false));
      });
    } catch (e) {
      setState(() {
        _messages.add(ChatMessage(text: 'Sorry, an error occurred', isUser: false));
      });
    } finally {
      setState(() => _isLoading = false);
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('AI Assistant')),
      body: Column(
        children: [
          Expanded(
            child: ListView.builder(
              itemCount: _messages.length,
              itemBuilder: (context, index) {
                final message = _messages[index];
                return ChatBubble(
                  text: message.text,
                  isUser: message.isUser,
                );
              },
            ),
          ),
          if (_isLoading) LinearProgressIndicator(),
          _buildInputArea(),
        ],
      ),
    );
  }

  Widget _buildInputArea() {
    return Container(
      padding: EdgeInsets.all(8),
      child: Row(
        children: [
          Expanded(
            child: TextField(
              controller: _controller,
              decoration: InputDecoration(hintText: 'Enter message...'),
              onSubmitted: (_) => _sendMessage(),
            ),
          ),
          IconButton(
            icon: Icon(Icons.send),
            onPressed: _sendMessage,
          ),
        ],
      ),
    );
  }
}
```
State Management Integration
Use Provider to manage conversation state:
```dart
class ChatProvider extends ChangeNotifier {
  final DialogflowService _dialogflow;
  final List<ChatMessage> _messages = [];
  bool _isLoading = false;

  ChatProvider(this._dialogflow);

  List<ChatMessage> get messages => _messages;
  bool get isLoading => _isLoading;

  Future<void> sendMessage(String text) async {
    _messages.add(ChatMessage(text: text, isUser: true));
    _isLoading = true;
    notifyListeners();

    try {
      final response = await _dialogflow.detectIntent(text);
      _messages.add(ChatMessage(text: response, isUser: false));
    } catch (e) {
      _messages.add(ChatMessage(text: 'An error occurred', isUser: false));
    } finally {
      _isLoading = false;
      notifyListeners();
    }
  }
}
```
Voice Assistant Features
Add voice input and output support to your app.
Speech-to-Text Integration
Android (using SpeechRecognizer):
```kotlin
class VoiceInputManager(private val context: Context) {
    private var speechRecognizer: SpeechRecognizer? = null
    private var onResult: ((String) -> Unit)? = null

    fun startListening(onResult: (String) -> Unit) {
        this.onResult = onResult
        speechRecognizer = SpeechRecognizer.createSpeechRecognizer(context)
        speechRecognizer?.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle?) {
                val matches = results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                matches?.firstOrNull()?.let { onResult(it) }
            }

            override fun onError(error: Int) {
                // Handle error
            }

            // Other required override methods...
        })

        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
            putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US")
        }
        speechRecognizer?.startListening(intent)
    }

    fun stopListening() {
        speechRecognizer?.stopListening()
        speechRecognizer?.destroy()
    }
}
```
Flutter (using speech_to_text):
```dart
import 'package:speech_to_text/speech_to_text.dart' as stt;

class VoiceInput {
  final stt.SpeechToText _speech = stt.SpeechToText();
  bool _isListening = false;

  Future<void> initialize() async {
    await _speech.initialize();
  }

  void startListening(Function(String) onResult) {
    _speech.listen(
      onResult: (result) {
        if (result.finalResult) {
          onResult(result.recognizedWords);
        }
      },
      localeId: 'en_US',
    );
    _isListening = true;
  }

  void stopListening() {
    _speech.stop();
    _isListening = false;
  }
}
```
Text-to-Speech Playback
```dart
import 'package:flutter_tts/flutter_tts.dart';

class VoiceOutput {
  final FlutterTts _tts = FlutterTts();

  Future<void> initialize() async {
    await _tts.setLanguage('en-US');
    await _tts.setSpeechRate(0.5);
  }

  Future<void> speak(String text) async {
    await _tts.speak(text);
  }

  Future<void> stop() async {
    await _tts.stop();
  }
}
```
Continuous Dialogue Mode
Implement Siri-like continuous dialogue experience:
```dart
import 'dart:async';

class ContinuousDialogue {
  final VoiceInput _voiceInput;
  final VoiceOutput _voiceOutput;
  final DialogflowService _dialogflow;
  bool _isActive = false;

  ContinuousDialogue(this._voiceInput, this._voiceOutput, this._dialogflow);

  Future<void> startConversation() async {
    _isActive = true;
    while (_isActive) {
      // Voice input
      final userText = await _listenForInput();
      if (userText == 'end conversation') {
        _isActive = false;
        break;
      }
      // Call Dialogflow
      final response = await _dialogflow.detectIntent(userText);
      // Voice output
      await _voiceOutput.speak(response);
      // Brief pause before listening again
      await Future.delayed(Duration(milliseconds: 500));
    }
  }

  // Wraps VoiceInput in a Future that completes with the final transcript
  Future<String> _listenForInput() {
    final completer = Completer<String>();
    _voiceInput.startListening(completer.complete);
    return completer.future;
  }
}
```
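The loop above ends the conversation only on an exact transcript match, which is brittle for speech input ("End conversation." vs "end conversation"). A small normalizer makes the stop check more forgiving. The phrase list and helper name are illustrative, shown in Kotlin for consistency with the Android examples:

```kotlin
// Exact string comparison is brittle for speech transcripts; normalize
// case and punctuation before checking against a set of stop phrases.
// The stopPhrases list and isStopCommand name are illustrative only.

val stopPhrases = listOf("end conversation", "stop", "goodbye")

fun isStopCommand(transcript: String): Boolean {
    val normalized = transcript.lowercase()
        .replace(Regex("[^a-z0-9 ]"), "") // drop punctuation
        .trim()
    return stopPhrases.any { normalized == it || normalized.endsWith(" $it") }
}

fun main() {
    println(isStopCommand("End conversation.")) // true
    println(isStopCommand("What time is it?")) // false
}
```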
Security Considerations
Mobile app security deserves special attention: once an API key ships inside a binary and leaks, you can't take it back. You can only revoke and rotate it.
API Key Protection Strategies
Never do this:
```kotlin
// ❌ Hard-coding the key in code
val apiKey = "AIzaSyXXXXXXXXXXXXXXX"
```
Recommended approaches:
1. Use Backend Proxy (Recommended)
App → Your Backend (authentication + Dialogflow call) → App
Backend stores API key, App only communicates with backend.
2. Use Firebase Remote Config
```kotlin
val remoteConfig = Firebase.remoteConfig
remoteConfig.fetchAndActivate().addOnCompleteListener { task ->
    if (task.isSuccessful) {
        val apiKey = remoteConfig.getString("dialogflow_api_key")
    }
}
```
3. Use Android Keystore
```kotlin
val keyStore = KeyStore.getInstance("AndroidKeyStore")
keyStore.load(null)
// Securely store and retrieve keys
```
User Authentication Integration
Ensure only logged-in users can use AI features:
```kotlin
class SecureDialogflowClient(private val authService: AuthService) {
    suspend fun detectIntent(text: String): String {
        // Confirm the user is logged in
        val user = authService.currentUser
            ?: throw UnauthorizedException("Please log in first")
        // Get an access token
        val token = authService.getIdToken()
        // Call your backend (not Dialogflow directly)
        return apiService.chat(
            token = token,
            text = text
        )
    }
}
```
Sensitive Data Handling
Don't log sensitive conversations:
```kotlin
// ❌ Logging the full conversation
Log.d("Chat", "User said: $userInput")

// ✓ Log only necessary information
Log.d("Chat", "User sent message, length: ${userInput.length}")
```
Clear local conversation history:
```kotlin
fun clearChatHistory() {
    // Clear in-memory messages
    messages.clear()
    // Clear local storage
    sharedPreferences.edit().remove("chat_history").apply()
}
```
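A common companion to on-demand clearing is retention-based cleanup, so stale conversations don't accumulate on the device. A minimal sketch, where the `StoredMessage` type and 30-day TTL are illustrative assumptions:

```kotlin
// Drop messages older than a TTL before persisting, so old
// conversations are pruned automatically rather than kept forever.

data class StoredMessage(val text: String, val timestampMillis: Long)

const val TTL_MILLIS = 30L * 24 * 60 * 60 * 1000 // 30 days

fun pruneExpired(messages: List<StoredMessage>, nowMillis: Long): List<StoredMessage> =
    messages.filter { nowMillis - it.timestampMillis <= TTL_MILLIS }

fun main() {
    val now = 100L * 24 * 60 * 60 * 1000 // "day 100", in millis
    val history = listOf(
        StoredMessage("old", timestampMillis = 0),                       // day 0  -> expired
        StoredMessage("recent", timestampMillis = now - TTL_MILLIS / 2), // day 85 -> kept
    )
    val kept = pruneExpired(history, now)
    println(kept.map { it.text }) // [recent]
}
```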
Worried about API key security? Mobile app security issues are easy to overlook, and costly to fix after an incident. Book an architecture consultation to have us help you design a secure integration architecture.
Release Considerations
App Store / Play Store Review
Privacy Policy Requirements:
If your app collects conversation content, your privacy policy must explain:
- What data is collected
- How data is used
- How long data is retained
- How users can delete their data
Permission Explanations:
If the app requests the microphone permission, explain why, for example:
- "Used for voice input, allowing you to speak with the AI assistant"
Performance Optimization
Reduce Startup Time:
```kotlin
// Lazily create the Dialogflow client
private val dialogflowClient by lazy {
    DialogflowClient(applicationContext)
}
```
Background Processing:
```kotlin
// Run on a background thread
viewModelScope.launch(Dispatchers.IO) {
    val response = dialogflowClient.detectIntent(text)
    withContext(Dispatchers.Main) {
        updateUI(response)
    }
}
```
Offline Handling
Handling when there's no network:
```kotlin
suspend fun detectIntent(text: String): String {
    if (!isNetworkAvailable()) {
        return "Currently no network connection, please try again later."
    }
    return try {
        dialogflowClient.detectIntent(text)
    } catch (e: IOException) {
        "Network connection unstable, please try again later."
    }
}
```
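For transient failures rather than a fully offline device, a retry with exponential backoff often helps before falling back to the error message. A generic sketch; the `withRetry` helper is an illustrative assumption, and in an Android coroutine you'd use `delay()` instead of `Thread.sleep()`:

```kotlin
// Retry a flaky call up to maxAttempts times, doubling the wait
// between attempts (200ms, 400ms, 800ms, ...).

fun <T> withRetry(maxAttempts: Int = 3, baseDelayMillis: Long = 200, block: () -> T): T {
    var lastError: Exception? = null
    repeat(maxAttempts) { attempt ->
        try {
            return block()
        } catch (e: Exception) {
            lastError = e
            // Exponential backoff before the next attempt
            Thread.sleep(baseDelayMillis shl attempt)
        }
    }
    throw lastError ?: IllegalStateException("retry failed")
}

fun main() {
    var calls = 0
    // Fails twice, then succeeds on the third attempt.
    val result = withRetry(maxAttempts = 3, baseDelayMillis = 1) {
        calls++
        if (calls < 3) throw RuntimeException("transient") else "ok"
    }
    println("$result after $calls calls") // ok after 3 calls
}
```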
For more API integration details, refer to Dialogflow Fulfillment and API Integration Tutorial. For Intent design, refer to Dialogflow Intent and Context Tutorial. For cost estimation, refer to Dialogflow Pricing Complete Analysis.
FAQ
Q1: Does calling the Dialogflow API directly from a mobile app expose the service account key?
Yes, and it's the biggest security risk. Mobile apps are untrusted environments: users can reverse-engineer an APK/IPA and extract any secret baked into the app.
Common wrong approaches (all of these will leak, it's just a matter of time):
1. Putting the service account JSON key in APK assets
2. Hardcoding an API key in Kotlin/Swift
3. Storing the service account key in Firebase for the client to read
Correct approaches:
1. Backend proxy architecture: app → your backend → Dialogflow API. The backend holds the service account key; the app only talks to your backend using user auth (Firebase Auth, JWT tokens).
2. Short-lived tokens: the backend generates tokens with a 5–15 minute expiry for the app to call Dialogflow directly.
3. Workload Identity Federation (2025 recommended): if you're in the GCP ecosystem, WIF lets the app exchange Firebase Auth tokens for GCP credentials directly.
Additional protection:
- Rate limiting (e.g. 60 calls/min per user)
- Input validation (prevent prompt injection)
- Request signing (prevent man-in-the-middle tampering)
- Moderation before sending to Dialogflow (OpenAI Moderation API or Perspective API)
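The per-user rate limit mentioned above can be sketched as a sliding window on the proxy backend. This is purely illustrative: production backends typically use Redis or an API gateway rather than in-memory state:

```kotlin
// Sliding-window rate limiter sketch: allow at most maxCalls per user
// within any windowMillis interval, tracked in memory per user ID.

class RateLimiter(private val maxCalls: Int, private val windowMillis: Long) {
    private val calls = mutableMapOf<String, ArrayDeque<Long>>()

    fun allow(userId: String, nowMillis: Long): Boolean {
        val window = calls.getOrPut(userId) { ArrayDeque() }
        // Evict timestamps that fell out of the window.
        while (window.isNotEmpty() && nowMillis - window.first() >= windowMillis) {
            window.removeFirst()
        }
        if (window.size >= maxCalls) return false
        window.addLast(nowMillis)
        return true
    }
}

fun main() {
    val limiter = RateLimiter(maxCalls = 3, windowMillis = 60_000)
    val results = (0 until 4).map { limiter.allow("user-1", nowMillis = 1_000L + it) }
    println(results) // [true, true, true, false]
}
```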
Q2: What's the difference between the iOS and Android Dialogflow SDKs? Which is easier?
On both platforms we actually recommend wrapping the REST API yourself; the official SDKs aren't well maintained.
1. Official SDK status: Google's Android SDK is deprecated and long unupdated, iOS has no official SDK, and the official recommendation is the REST API.
2. iOS (Swift): URLSession or Alamofire paired with Auth0/Firebase Auth, e.g. `let url = URL(string: "https://your-backend.com/api/chat")!` with `URLSession.shared.dataTask(with: request) { ... }`. Pro: iOS networking is simple. Challenge: voice input integration (AVAudioEngine + Speech framework).
3. Android (Kotlin): Retrofit + OkHttp paired with Firebase Auth, e.g. `@POST("/api/chat") suspend fun chat(@Body request: ChatRequest): ChatResponse`. Pro: the Retrofit ecosystem is mature. Challenge: voice requires RecognitionListener + Android Speech or Google Cloud Speech-to-Text.
4. Cross-platform: React Native (react-native-voice + axios) or Flutter (speech_to_text + http). Shared Dart/JS logic can save 50%+ of the time compared to two native codebases.
Implementation priority:
1. MVP stage: a cross-platform framework (Flutter or RN) covers iOS + Android in one go.
2. Mature product: native iOS Swift + Android Kotlin for optimal performance.
3. Enterprise internal app: native with MDM (Mobile Device Management) for security requirements.
Q3: What should we watch for when integrating voice conversation in-app? How do we improve latency?
Total latency should stay under 2 seconds for good UX. Breakdown:
1. Speech-to-Text: 40–60% of latency, typically 800–1500ms. Optimize with (A) streaming STT (Google Cloud Streaming API) so transcription happens while the user is still speaking, or (B) the on-device Android Speech / iOS Speech framework (faster but lower quality).
2. Dialogflow intent recognition: 10–20%, typically 200–500ms. Little room to optimize beyond switching to Dialogflow CX (similar speed).
3. Webhook processing (if using fulfillment): 10–30%, depends on webhook complexity. Optimize with (A) Cloud Run with min instances (avoid cold starts), (B) caching, (C) moving non-critical logic async.
4. Text-to-Speech: 10–30%, typically 300–800ms. Optimize with (A) streaming TTS so playback starts during generation, (B) pre-rendering common responses as MP3.
5. Network: 5–15%, depends on the user's connection.
Optimization summary:
- Streaming is key; give users "transcribing..." visual feedback.
- Reduce round trips; don't bounce between multiple APIs.
- Preload: warm up connections when the app opens.
- Fallback UX: show "please wait" if a response takes over 3 seconds.
For reference, Google Assistant averages about 1.2 seconds of latency; that's the target.
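Summing the typical midpoints of the breakdown above shows why streaming matters: the sequential budget alone already exceeds the 2-second target. The numbers below are the quoted midpoints and purely illustrative:

```kotlin
// Rough sequential latency budget from the breakdown above.
// Each value is the midpoint of the quoted range, for illustration.

fun main() {
    val budgetMillis = mapOf(
        "speech-to-text" to 1150,  // 800–1500ms
        "intent detection" to 350, // 200–500ms
        "text-to-speech" to 550,   // 300–800ms
    )
    val total = budgetMillis.values.sum()
    println("total: ${total}ms, target: <2000ms, over: ${total > 2000}")
}
```

Running the stages sequentially blows the budget, which is exactly why streaming STT/TTS (overlapping the stages) is the main optimization lever.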
Q4: Will the App Store / Google Play reject apps for AI conversation features? What compliance issues should we watch for?
Yes, reviews are increasingly strict. Common rejection reasons:
1. Content moderation: Apple requires moderation of app-generated content (no AI-generated adult, violent, or hateful content); Google Play has a similar requirement. Fix: run output through a moderation API (OpenAI Moderation, Perspective API).
2. User data protection: Apple requires detailed App Privacy declarations of the data you collect; GDPR/CCPA require opt-in and deletion. Fix: a clear privacy policy plus in-app data viewing/deletion.
3. Age restrictions: if the app is available to minors (rated under 12+), strict limits apply to AI conversation, and a parental consent mechanism is required. Fix: provide a "restricted mode" or set a 13+ rating.
4. Feature description match: if the app claims "AI customer service" but the functionality is basic, it may be rejected; actual features must match the App Store description.
5. Medical/financial/legal advice: AI giving medical, financial, or legal advice faces strict limits. Required: add disclaimers, clearly label output as "not professional advice," and direct serious cases to human professionals.
Practical advice:
1. Read App Store Guidelines 4.8 (Login Services) and 5.1 (Privacy) before your first submission.
2. Run a TestFlight beta with real users.
3. Prepare moderation reports so you have them ready when asked.
4. The conversation UI must make users aware "this is AI"; don't impersonate humans.
Q5: My app already uses Firebase. Can I integrate Dialogflow directly? Will costs explode?
Yes, Firebase is one of the smoothest choices.
Integration methods:
1. Firebase Extensions: Google provides an official "Dialogflow Integration" extension with one-click install; it automatically sends Firestore messages to Dialogflow and handles the replies.
2. Cloud Functions for Firebase: write a Cloud Function as the webhook/proxy. Firebase Auth validates the user, the function calls Dialogflow, and conversations are stored in Firestore.
3. Firebase Auth + Dialogflow: the app logs in via Firebase Auth, sends its token to your Cloud Function, and the function uses that token to identify the user.
Cost breakdown (for a 500-DAU app):
- Firebase: Authentication is free; Firestore roughly $5–20/month (1M reads, 200K writes, 100K deletes); the Cloud Functions free tier of 2M invocations/month usually suffices.
- Dialogflow ES: $270/month (as calculated previously).
- Cloud Storage (if storing voice files): $5–10/month.
- Cloud Run (webhook, if not using Functions): $20–50/month.
- Total: roughly $300–400/month.
Cost-saving tips:
1. Use Firestore over Realtime Database (Firestore is cheaper here).
2. Set Cloud Functions min instances to 0 (cold starts, but saves money).
3. Use up Dialogflow's 15K requests/month free tier.
4. Store only necessary data, and set a 90-day TTL on conversations for auto-deletion.
At scale, watch out: past 10K DAU, Dialogflow fees dominate (possibly $5,000+/month); at that point, evaluate self-hosted open-source alternatives (Rasa) or cheaper LLM APIs (Gemini Flash).
Next Steps
After completing mobile integration, you can:
- Optimize Conversation Design: Dialogflow Intent and Context Complete Tutorial
- Develop Backend Integration: Dialogflow Fulfillment and API Integration Tutorial
- Control Costs: Dialogflow Pricing Complete Analysis
Want to Add AI Conversation Features to Your App?
Integrating AI conversations in Apps involves many technical details: API integration, security, performance optimization, review compliance...
If you need:
- Add AI customer service features to existing App
- Develop new App with voice assistant
- Design secure and reliable integration architecture
- Cross-platform (iOS + Android) solution
Book AI implementation consultation to have an experienced team help you plan and implement.
We've helped multiple apps successfully integrate AI conversation features, and the consultation is completely free.