...

What We Think

Blog

Keep up with the latest in technological advancements and business strategies, with thought leadership articles contributed by our staff.
TECH

December 8, 2025

Decorator Pattern Explained Simply

During application development, we often need to add new functionality to an existing feature. Doing so can introduce unpredictable errors: when we modify old code to accommodate new behavior, we have to minimize the chance that the addition affects shared state used elsewhere.

In such cases, the Decorator design pattern is a good fit. Simply put, the Decorator pattern lets you attach new behavior to an individual object without affecting the behavior of other objects of the same class.

1. Problem

Imagine that you have a Customer with a property called Cart and behaviors like AddToCart and CheckOut. In a typical workflow, the user adds products to their Cart and then performs CheckOut. Once CheckOut succeeds, the order details are sent to your warehouse for shipping.


However, to meet new customer demands, a faster shipping carrier is introduced. Now customers can choose between the traditional carrier and the faster one. At this point, you might think of modifying the CheckOut process to add a condition: if the customer selects a different shipping carrier, you execute a different CheckOut behavior.

But if more shipping carriers are added in the future, managing the code will become increasingly difficult due to the numerous conditions and behaviors being added. Therefore, it's best if these new behaviors are added using a Decorator.

By creating a base decorator class that wraps the Customer object, we can rewrite the CheckOut behavior with the newly added processing while still preserving the original behavior of the Customer object.

2. Usage examples (Decorator Pattern)

I will write a code snippet to illustrate how to use the decorator in the above scenario as follows:

Cart.cs

namespace Decorator
{
    public class Cart
    {
        public List<string> Products { get; private set; } = [];

        public void AddProduct(string product)
        {
            Products.Add(product);
        }

        public string Details()
        {
            return string.Join(", ", Products);
        }
    }
}

Customer.cs

namespace Decorator
{
    public abstract class Customer
    {
        public Cart Cart { get; set; } = new();

        public virtual void AddToCart(string product)
        {
            Cart.AddProduct(product);
        }

        public abstract void CheckOut();
    }
}

DecoratorCustomer.cs

namespace Decorator
{
    public abstract class DecoratorCustomer : Customer
    {
        protected Customer? _customerComponent;

        public DecoratorCustomer(Customer customerComponent)
        {
            _customerComponent = customerComponent;
        }

        public override void CheckOut()
        {
            if (_customerComponent != null)
            {
                _customerComponent.CheckOut();
            }
        }
    }
}

DecoratedCustomer.cs

namespace Decorator
{
    public class DecoratedCustomer : DecoratorCustomer
    {
        private string _deliveryProvider = "Default delivery";

        public DecoratedCustomer(Customer customerComponent) : base(customerComponent)
        {
        }

        public void AddDeliveryInfo(string deliveryProvider)
        {
            _deliveryProvider = deliveryProvider;
        }

        public override void CheckOut()
        {
            Console.WriteLine("Delivery information");
            // Get delivery time based on the delivery provider
            GetDeliveryInfo();
            base.CheckOut();
        }

        private void GetDeliveryInfo()
        {
            Console.WriteLine($"Delivery provider: {_deliveryProvider}");
            if (_deliveryProvider.Contains("Express"))
                Console.WriteLine("Delivery time: 1 day");
            else
                Console.WriteLine("Delivery time: 2-3 days");
        }
    }
}

ConcreteCustomer.cs

namespace Decorator
{
    public class ConcreteCustomer : Customer
    {
        public override void CheckOut()
        {
            Console.WriteLine($"Checkout information: {Cart.Details()}");
        }
    }
}

Program.cs

using Decorator;

// Default customer checkout information
ConcreteCustomer customer = new();
customer.AddToCart("Fridge");
customer.AddToCart("Washing machine");
customer.AddToCart("Oven");
customer.AddToCart("Microwave");
customer.CheckOut();
Console.WriteLine("======#==#======");

// Decorated customer checkout information
DecoratedCustomer decoratedCustomer = new(customer);
decoratedCustomer.AddDeliveryInfo("Express delivery");
decoratedCustomer.CheckOut();
Console.WriteLine("======#==#======");

The result after executing the program is as follows:
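Based on the code above, the expected console output is:

Checkout information: Fridge, Washing machine, Oven, Microwave
======#==#======
Delivery information
Delivery provider: Express delivery
Delivery time: 1 day
Checkout information: Fridge, Washing machine, Oven, Microwave
======#==#======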

As you can see, the original checkout output included only the Cart details. After being decorated, the checkout also prints the shipping carrier information.

With this approach, you can customize an object's behavior for different use cases while keeping the original object stable and intact. That said, applying the pattern effectively takes some experience: it spreads functionality across multiple files, which can make the code harder to read and follow. Developers should weigh whether this structure suits their project before adopting it.

 

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.


Reference:

  1. Decorator design pattern
  2. Decorator example in C#
TECH

December 8, 2025

AWS Certified Cloud Practitioner (CLF-C02) - Everything You Need to Know About the Exam Outline

Discover the complete outline of the AWS Certified Cloud Practitioner CLF-C02 exam, including structure, domain weightings, and target candidates. A beginner-friendly guide to preparing for your first AWS certification.

Introduction

Are you interested in cloud computing but don't know where to start? The AWS Certified Cloud Practitioner certification is often the perfect first step into the world of Amazon Web Services (AWS) cloud technology. This entry-level certification validates your basic understanding of AWS cloud concepts without requiring any hands-on technical experience or programming skills.

The AWS Certified Cloud Practitioner (CLF-C02) exam is designed for individuals who can demonstrate overall knowledge of the AWS Cloud, regardless of their specific job role. It proves that you understand the fundamental concepts of cloud computing and how AWS can benefit businesses and individuals.

This is the first post in my series about the CLF-C02 exam. Today, we'll focus on the exam outline - giving you a complete overview of what the certification covers, who it's for, and how it's structured. Future posts will dive deeper into each of the four main content areas with more detailed explanations and study tips.

What's New in CLF-C02 vs CLF-C01?

If you've studied for the previous version (CLF-C01), the CLF-C02 has some key updates:

  • Streamlined content: Removed migration strategies and cloud adoption frameworks (now covered in higher-level exams)
  • Updated security focus: More emphasis on compliance and modern security tools
  • Enhanced global infrastructure: Better coverage of Regions, Availability Zones, and edge locations
  • Current services: Includes newer AWS offerings and pricing models

The CLF-C02 focuses more on core concepts while reducing implementation details.

Overview of the Exam

Who Should Take This Exam?

The AWS Certified Cloud Practitioner exam is targeted at people who have up to 6 months of exposure to AWS Cloud concepts through work, self-study, or casual interaction with cloud technologies. You might be:

  • Someone just starting their career in cloud computing
  • Working in IT support or operations with occasional AWS exposure
  • A business professional who needs to understand cloud basics
  • A student or career changer exploring cloud technology options

Important Note: This exam does NOT require you to perform technical tasks like coding, designing complex cloud architectures, troubleshooting systems, implementing solutions, or conducting performance testing. It's about understanding concepts, not hands-on skills.

Recommended Knowledge Areas

Before taking the exam, you should be familiar with:

  • AWS Cloud Concepts - Basic ideas about cloud computing
  • Security and Compliance in AWS - How AWS handles data protection and regulatory requirements
  • Core AWS Services - Main offerings for computing, storage, and networking
  • Economics of the AWS Cloud - Cost structures and financial benefits
  • Shared Responsibility Model - Understanding the boundaries of responsibility between AWS and customers

Exam Format and Scoring

The CLF-C02 exam consists of 65 questions (50 scored questions and 15 unscored questions that are unmarked and used by AWS for future exam development). You won't know which questions are unscored.

Question Types:

  • Multiple choice: One correct answer out of four options
  • Multiple response: Two or more correct answers out of five or more options

You have 90 minutes to complete the exam, and there's no penalty for guessing wrong answers.

Scoring System:

  • Results are reported as a scaled score from 100-1000
  • Minimum passing score is 700
  • The exam uses "compensatory scoring," meaning you don't need to pass each section individually - your overall performance across all questions determines if you pass

Content Structure - The 4 Main Domains

The exam is organized into four content domains, each with a different percentage weighting. This means some areas have more questions than others.

Exam Domains Overview:

  • Cloud Concepts (24%): Benefits, Well-Architected Framework, Cloud Economics
  • Security and Compliance (30%): Shared Responsibility, IAM, Security Services
  • Cloud Technology and Services (34%): Core services for Compute, Storage, Network, and Databases
  • Billing, Pricing, and Support (12%): Pricing models, Cost tools, Support plans

Domain 1: Cloud Concepts (24% of scored content)

This domain focuses on the fundamental benefits and principles of cloud computing.

Key Topics:

  • Benefits of AWS Cloud: Understanding advantages like global reach (data centers worldwide), speed of deployment (quick setup), high availability (services stay running), elasticity (scale up/down as needed), and agility (adapt quickly to changes)
  • Design Principles: Learning about the AWS Well-Architected Framework, which includes six pillars: operational excellence (efficient operations), security (data protection), reliability (consistent performance), performance efficiency (optimal resources), cost optimization (spending wisely), and sustainability (environmental responsibility)
  • Cloud Economics: Understanding cost differences between traditional on-premises systems (fixed costs) versus cloud (variable costs), licensing options, and benefits like economies of scale (cost savings from large-scale operations)

Domain 2: Security and Compliance (30% of scored content)

Security is one of the largest domains, showing how important it is in cloud computing.

Key Topics:

  • Shared Responsibility Model: AWS and customers each handle different security aspects. For example, AWS secures the underlying infrastructure, while customers secure their data and applications
  • Security Concepts: Encryption concepts at a high level (data protection), compliance requirements, and monitoring tools
  • Access Management: Using AWS Identity and Access Management (IAM) for user permissions, including high-level understanding of IAM users, groups, roles, and permission policies, plus multi-factor authentication (MFA) for extra security
  • Security Services: Tools like AWS Shield (DDoS protection), Amazon GuardDuty (threat detection), AWS Security Hub (security monitoring), and AWS Config (resource compliance checking)

Domain 3: Cloud Technology and Services (34% of scored content)

This is the largest domain and covers AWS's core offerings.

Key Topics:

  • Deployment Methods: Using the AWS Management Console (web interface), APIs (programming interfaces), CLI (command-line tools), and infrastructure as code (automated setup)
  • Global Infrastructure: Understanding Regions (geographic areas), Availability Zones (data centers within regions), Local Zones (extensions of AWS services closer to users for low-latency workloads), and edge locations (content delivery points). Benefits include high availability through multiple Availability Zones, disaster recovery capabilities, and compliance with data sovereignty requirements
  • Compute Services: Amazon EC2 (virtual servers), container services like Amazon ECS/EKS, serverless computing with AWS Lambda, and auto scaling (automatic resource adjustment)
  • Database Services: Relational databases (Amazon RDS), NoSQL databases (DynamoDB)
  • Network Services: Amazon VPC (virtual private cloud), security groups (firewalls), Amazon Route 53 (DNS service), and connectivity options like AWS VPN
  • Storage Services: Amazon S3 (object storage), EBS (block storage for servers), EFS (file storage), and lifecycle policies (automatic data management)
  • Analytics and AI/ML: Basic awareness of services like Amazon QuickSight (data visualization) and Amazon SageMaker (machine learning) - you just need to recognize these at a high level, not detailed usage

Domain 4: Billing, Pricing, and Support (12% of scored content)

This smaller domain covers the business side of AWS.

Key Topics:

  • Pricing Models: On-Demand (pay as you go), Reserved Instances (discount for long-term commitment), Spot Instances (bid for unused capacity), and Savings Plans (flexible pricing discounts)
  • Cost Management: Using AWS Cost Explorer (spending analysis), AWS Budgets (spending alerts), and cost allocation tags (tracking expenses by department/project)
  • Support Options: Different AWS Support plans (Basic, Developer, Business, Enterprise, and Enterprise On-Ramp), AWS Marketplace (third-party tools), and resources like AWS re:Post (community forum) and Trusted Advisor (optimization recommendations)

Preparation Tips

Since this is a foundational certification, you don't need extensive technical experience. Focus on understanding concepts rather than implementation details. Here are some practical CLF-C02 study guide tips:

  1. Official Resources: Start with the free AWS whitepapers, documentation, and AWS Skill Builder training
  2. Official Sample Questions: AWS provides a free set of CLF-C02 practice questions to help you get familiar with the exam format
  3. Practice Exams: Take additional sample questions to build confidence
  4. Hands-On Practice: Use the AWS Free Tier (free services for new users) to experiment
  5. Study Time: Plan for 2-4 weeks of preparation if you're new to cloud concepts
  6. Focus Areas: Pay special attention to the Well-Architected Framework and shared responsibility model - these appear frequently

For more AWS Cloud Practitioner exam tips, check out official AWS training paths and community forums.

Conclusion

The AWS Certified Cloud Practitioner certification provides a solid foundation for anyone interested in cloud computing. With its focus on fundamental concepts across four balanced domains - Cloud Concepts, Security and Compliance, Technology and Services, and Billing and Support - it ensures you understand both the technical and business aspects of AWS.

This certification doesn't require deep technical skills, making it accessible for beginners while still being valuable for career advancement. Whether you're looking to start a cloud career, improve your current role, or simply understand modern IT infrastructure, the CLF-C02 is an excellent starting point.

In the next post, we'll dive deep into Domain 1: Cloud Concepts, exploring the specific benefits of AWS Cloud and real-world examples of how businesses leverage these advantages. Stay tuned, and feel free to ask questions about this exam outline in the comments!

Have you taken the CLF-C02 exam? What surprised you most about the content? Share your experience below!

 

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

 

References

[1]. AWS Certified Cloud Practitioner. Retrieved from https://aws.amazon.com/certification/certified-cloud-practitioner/

TECH

December 8, 2025

Modern Flutter UI Design Patterns for 2025

Modern Flutter UI/UX Patterns

Flutter UI is evolving fast. In 2025, users expect smooth animations, responsive layouts, clean structure, and adaptive design across all platforms (mobile, web, desktop).
Here are the modern UI/UX patterns that every Flutter developer should follow.

Component-Driven UI (CDU)

Instead of building pages first, build reusable components:

  • Buttons
  • Cards
  • Form fields
  • Bottom sheets
  • Custom app bars

Then compose them into screens.
This reduces UI duplication and improves long-term maintainability.

Tip: Create a ui/ or components/ folder to organize everything.

Design Tokens (2025 Standard)

Your UI system should rely on:

  • Colors
  • Spacing
  • Typography
  • Corner radius
  • Shadows

All defined in one place.

Dart:

class AppSpacing {
     static const s = 8.0;
     static const m = 16.0;
     static const l = 24.0;
}

Tokens = clean, consistent design.

Advanced Theming with ColorScheme + Material 3

M3 is fully mature in 2025.
Use:

  • ColorScheme.fromSeed()
  • Dynamic color harmonization
  • Light/dark adaptive palettes

This gives your app a modern, unified look with almost no work.
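For example, here is a minimal sketch of seeded Material 3 theming with adaptive light/dark palettes. The teal seed color is just a placeholder; swap in your brand color.

Dart:

import 'package:flutter/material.dart';

void main() => runApp(const MyApp());

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    const seed = Colors.teal; // placeholder brand color
    return MaterialApp(
      theme: ThemeData(
        useMaterial3: true,
        colorScheme: ColorScheme.fromSeed(seedColor: seed),
      ),
      darkTheme: ThemeData(
        useMaterial3: true,
        colorScheme: ColorScheme.fromSeed(
          seedColor: seed,
          brightness: Brightness.dark,
        ),
      ),
      themeMode: ThemeMode.system, // follow the device theme setting
      home: Scaffold(
        body: const Center(child: Text('Material 3 theming')),
      ),
    );
  }
}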

Motion-First Design

Micro animations are expected in every modern app:

  • Smooth transitions
  • Subtle scaling on tap
  • Slide-in content
  • Animated icons
  • Hero animations for navigation

Use:

  • Implicitly animated widgets (e.g., AnimatedContainer, AnimatedScale)
  • AnimationController + Tween
  • Motion packages (ex: Flutter Animate)

Rule: Animation should feel invisible but make the experience smoother.
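As one small example, a subtle scale-on-tap micro-interaction built from an implicitly animated widget; the widget name is illustrative.

Dart:

import 'package:flutter/material.dart';

/// Subtle scale-on-tap using an implicitly animated widget.
class TapScaleCard extends StatefulWidget {
  const TapScaleCard({super.key, required this.child});
  final Widget child;

  @override
  State<TapScaleCard> createState() => _TapScaleCardState();
}

class _TapScaleCardState extends State<TapScaleCard> {
  bool _pressed = false;

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      onTapDown: (_) => setState(() => _pressed = true),
      onTapUp: (_) => setState(() => _pressed = false),
      onTapCancel: () => setState(() => _pressed = false),
      child: AnimatedScale(
        scale: _pressed ? 0.97 : 1.0,
        duration: const Duration(milliseconds: 120),
        curve: Curves.easeOut,
        child: widget.child,
      ),
    );
  }
}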

Layouts Built for Multi-Platform

Mobile-only UIs feel outdated.
2025 apps must adapt beautifully to:

  • Phones
  • Tablets
  • Desktops
  • Web

Use:

  • LayoutBuilder
  • MediaQuery
  • Breakpoints (custom or via packages)
  • ResponsiveGrid or Flexible patterns

Your UI should scale, not stretch.
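A minimal sketch of a breakpoint-driven layout with LayoutBuilder; the 1024 px breakpoint and the two destinations are placeholders to adapt to your design system.

Dart:

import 'package:flutter/material.dart';

class ResponsiveScaffold extends StatelessWidget {
  const ResponsiveScaffold({super.key, required this.body});
  final Widget body;

  @override
  Widget build(BuildContext context) {
    return LayoutBuilder(
      builder: (context, constraints) {
        if (constraints.maxWidth >= 1024) {
          // Desktop / web: permanent navigation rail beside the content.
          return Scaffold(
            body: Row(
              children: [
                NavigationRail(
                  selectedIndex: 0,
                  destinations: const [
                    NavigationRailDestination(
                        icon: Icon(Icons.home), label: Text('Home')),
                    NavigationRailDestination(
                        icon: Icon(Icons.settings), label: Text('Settings')),
                  ],
                ),
                Expanded(child: body),
              ],
            ),
          );
        }
        // Phones / small tablets: standard scaffold with a bottom bar.
        return Scaffold(
          body: body,
          bottomNavigationBar: NavigationBar(
            destinations: const [
              NavigationDestination(icon: Icon(Icons.home), label: 'Home'),
              NavigationDestination(icon: Icon(Icons.settings), label: 'Settings'),
            ],
          ),
        );
      },
    );
  }
}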

Declarative Navigation (go_router or Routemaster)

Modern apps use:

  • Deep linking
  • URL-based routing
  • Typed params
  • Clean navigation stacks

go_router is the 2025 standard.
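A small go_router sketch with URL-based routing and a path parameter. The screen widgets and routes are placeholders, and the API shown assumes go_router 10+, where path parameters are read from state.pathParameters.

Dart:

import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';

final router = GoRouter(
  routes: [
    GoRoute(
      path: '/',
      builder: (context, state) => const HomeScreen(),
      routes: [
        GoRoute(
          // Deep-linkable URL such as /product/42
          path: 'product/:id',
          builder: (context, state) =>
              ProductScreen(id: state.pathParameters['id']!),
        ),
      ],
    ),
  ],
);

void main() => runApp(MaterialApp.router(routerConfig: router));

class HomeScreen extends StatelessWidget {
  const HomeScreen({super.key});
  @override
  Widget build(BuildContext context) =>
      Scaffold(body: const Center(child: Text('Home')));
}

class ProductScreen extends StatelessWidget {
  const ProductScreen({super.key, required this.id});
  final String id;
  @override
  Widget build(BuildContext context) =>
      Scaffold(body: Center(child: Text('Product $id')));
}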

State + UI Separation

UI must be clean.
Logic must be separate.

Recommended patterns:

  • Riverpod (current best choice in 2025)
  • Bloc (enterprises still love it)
  • Clean architecture (for large-scale apps)

Your widgets should:

  • Render UI
  • Subscribe to state
  • Handle user input
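A minimal Riverpod sketch of this separation: the widget only renders, watches state, and forwards input, while the state itself lives in a provider. The counter is just a stand-in for real app state.

Dart:

import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// A simple piece of app state kept outside the widget tree.
final counterProvider = StateProvider<int>((ref) => 0);

class CounterScreen extends ConsumerWidget {
  const CounterScreen({super.key});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    final count = ref.watch(counterProvider); // subscribe to state
    return Scaffold(
      body: Center(child: Text('Count: $count')),
      floatingActionButton: FloatingActionButton(
        // Handle user input by delegating to the provider, not local logic.
        onPressed: () => ref.read(counterProvider.notifier).state++,
        child: const Icon(Icons.add),
      ),
    );
  }
}

void main() =>
    runApp(ProviderScope(child: MaterialApp(home: const CounterScreen())));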

Skeleton Loading + Shimmer Effects

Users expect:

  • Instant feedback
  • Placeholder UI
  • Smooth loading skeletons

Skeleton UIs have become the default loading UX in modern apps.

Adaptive Dark Mode

Not just dark/light toggle.
Modern apps support:

  • Device theme
  • AMOLED true black for battery saving
  • Adaptive elevation overlays

Dark mode must be a first-class citizen.

Consistency with Design Systems (Figma → Flutter)

A modern workflow is:

  • Build design system in Figma
  • Export tokens
  • Map to Flutter (ColorScheme, TextTheme, Elevation)
  • Auto-sync future updates

Use tools like:

  • Figma Tokens
  • FlutterGen
  • Your own mapping classes

This ensures a unified UI between design and code.

Conclusion

By integrating these modern UI/UX patterns, developers can build Flutter applications that not only look contemporary but are also efficient to develop, easy to maintain, and truly delightful for users on any platform.

       

Ready to get started?

Contact IVC for a free consultation and discover how we can help your business grow online.

      TECH

      December 8, 2025

      Architecting Production-Ready Flutter Plugins

      Introduction

      Flutter excels at rendering a declarative UI at 60fps, but it remains a guest on the host operating system. When an application requires access to platform-specific APIs—such as low-energy Bluetooth, obscure biometric sensors, or background process management—we must bridge the gap between the Dart runtime and the native host.

      While the MethodChannel API is the foundational transport layer, building a scalable, maintainable plugin requires more than just passing strings back and forth. This post details the architecture and engineering standards for building production-grade Flutter plugins, focusing on the Federated Architecture, Type Safety, and Concurrency.

      How to implement

      The Federated Plugin Architecture

      For production systems, monolithic plugins (where Android, iOS, and Dart code reside in one package) are discouraged. The industry standard is the Federated Plugin Architecture. This pattern decouples the API definition from the platform implementations, enabling independent scalability and testing.

      The Structure

      A federated plugin is split into multiple packages, typically organized in a monorepo:

      • plugin_name (App-Facing): The entry point for consumers. It forwards calls to the default platform instance.
      • plugin_name_platform_interface: Contains abstract base classes and data models. This ensures all platform implementations adhere to the same contract.
      • plugin_name_android / plugin_name_ios: The concrete implementations for specific platforms.

      Benefits:

      • Isolating Dependencies: Android-specific Gradle dependencies do not leak into the Web or iOS implementations.
      • Testability: The platform_interface allows you to inject mock implementations during Dart unit tests without needing a simulator.
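       As a sketch of what the platform_interface package typically contains, assuming the standard plugin_platform_interface package. The class and method names reuse the image-compressor example from the later sections and are illustrative only.

       Dart:

       import 'dart:typed_data';

       import 'package:plugin_platform_interface/plugin_platform_interface.dart';

       /// The contract every platform implementation must fulfil.
       abstract class ImageCompressorPlatform extends PlatformInterface {
            ImageCompressorPlatform() : super(token: _token);

            static final Object _token = Object();

            static ImageCompressorPlatform? _instance;

            /// The instance registered by the active platform package
            /// (e.g. plugin_name_android) at startup.
            static ImageCompressorPlatform get instance =>
                 _instance ?? (throw StateError('No platform implementation registered.'));

            static set instance(ImageCompressorPlatform newInstance) {
                 PlatformInterface.verify(newInstance, _token);
                 _instance = newInstance;
            }

            /// Platform packages override this with a real implementation.
            Future<Uint8List> compress(Uint8List rawData, {int quality = 80}) {
                 throw UnimplementedError('compress() has not been implemented.');
            }
       }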

      Enforcing Type Safety with Pigeon

      The raw MethodChannel relies on Map<String, dynamic> and untyped standard message codecs. This is brittle; a typo in a map key or a mismatched data type causes runtime crashes (ClassCastException) rather than compile-time errors.

      The Solution: Pigeon.

      Pigeon is a code generation tool that creates type-safe bridges between Dart and Native code. It generates the serialization logic, ensuring that data contracts are respected across boundaries.

      Step A: Define the Schema (Dart)

      Create a standalone Dart file (e.g., pigeons/messages.dart) to define the API.

      Dart:

      import 'package:pigeon/pigeon.dart';

      @ConfigurePigeon(PigeonOptions(
           dartOut: 'lib/src/messages.g.dart',
           kotlinOut: 'android/src/main/kotlin/com/example/plugin/Messages.g.kt',
           swiftOut: 'ios/Classes/Messages.g.swift',
           kotlinOptions: KotlinOptions(package: 'com.example.plugin'),
      ))

      class CompressionConfig {
           int? quality;
           String? format; // 'jpeg' or 'png'
      }

      class CompressionResult {
           Uint8List? data;
           String? error;
      }

      @HostApi()
      abstract class ImageCompressorApi {
           @async
           CompressionResult compress(Uint8List rawData, CompressionConfig config);
      }

      Step B: Generate the Protocol

      Running the Pigeon generator produces:

      • Dart: An abstract class used by your plugin logic.
      • Kotlin: An interface (ImageCompressorApi) to implement.
      • Swift: A protocol (ImageCompressorApi) to conform to.

      Android Implementation (Kotlin)

      Modern Android plugins should be written in Kotlin and must handle lifecycle awareness and threading correctly.

      The generated Pigeon interface simplifies the setup. Note the use of Coroutines to move work off the main thread.

      Kotlin:

      import io.flutter.embedding.engine.plugins.FlutterPlugin
      import kotlinx.coroutines.*

      class ImageCompressorPlugin : FlutterPlugin, ImageCompressorApi {
           private val scope = CoroutineScope(Dispatchers.Main)

           override fun onAttachedToEngine(binding: FlutterPlugin.FlutterPluginBinding) {
                // Wire up the generated Pigeon API
                ImageCompressorApi.setUp(binding.binaryMessenger, this)
           }

           override fun compress(
                rawData: ByteArray,
                config: CompressionConfig,
                result: Result<CompressionResult>
           ) {
                // MOVE TO BACKGROUND THREAD
                 scope.launch(Dispatchers.Default) {
                      try {
                           val compressedData = NativeCompressor.process(rawData, config.quality)
                           val output = CompressionResult(data = compressedData)

                           // Return to Main Thread to send result back to Flutter
                           withContext(Dispatchers.Main) {
                                result.success(output)
                           }
                      } catch (e: Exception) {
                           withContext(Dispatchers.Main) {
                                result.error(e)
                           }
                      }
                 }
            }

           override fun onDetachedFromEngine(binding: FlutterPlugin.FlutterPluginBinding) {
                ImageCompressorApi.setUp(binding.binaryMessenger, null)
                scope.cancel() // Prevent memory leaks
           }
      }

      Key Engineering Consideration: If your plugin requires Activity references (e.g., for startActivityForResult or Permissions), your plugin class must implement ActivityAware. Do not rely on the deprecated Registrar.

      iOS Implementation (Swift)

      iOS implementation follows a similar pattern using Swift protocols and Grand Central Dispatch (GCD).

      Swift:

      import Flutter
      import UIKit

      public class SwiftCompressorPlugin: NSObject, FlutterPlugin, ImageCompressorApi {

           public static func register(with registrar: FlutterPluginRegistrar) {
                let messenger = registrar.messenger()
                let api = SwiftCompressorPlugin()
                 // Wire up the generated Pigeon API
                ImageCompressorApiSetup.setUp(binaryMessenger: messenger, api: api)
           }

           func compress(
                rawData: FlutterStandardTypedData,
                config: CompressionConfig,
                completion: @escaping (Result<CompressionResult, Error>) -> Void
           ) {
                 // MOVE TO BACKGROUND QUEUE
                 DispatchQueue.global(qos: .userInitiated).async {
                      do {
                           let data = try NativeCompressor.process(rawData.data, quality: config.quality)
                           let result = CompressionResult(data: data, error: nil)

                           // The Pigeon-generated callback is thread-safe,
                           // but explicit main queue dispatch is good practice for UI work
                           DispatchQueue.main.async {
                                completion(.success(result))
                           }
                      } catch {
                           completion(.failure(error))
                      }
                 }
           }
      }

      Performance and Concurrency

      A common bottleneck in plugin development is blocking the Platform Thread.

      • The Issue: Flutter's Platform Channels invoke native methods on the host's Main Thread (UI Thread).
      • The Consequence: If you perform JSON parsing, Bitmap decoding, or File I/O directly in the handler, the entire device UI (not just the Flutter app) will freeze (Jank).
      • The Fix: Always offload operations exceeding 16ms to a background thread (using Dispatchers.IO in Kotlin or DispatchQueue.global in Swift) immediately upon receiving the call.

      Testing Strategy

      Robust plugins require a layered testing approach.

      Unit Tests (Dart)

      Mock the platform interface. Because we decoupled the logic, we can test the Dart transformation layers without an emulator.

      Dart:

       class MockApi implements ImageCompressorApi {
            @override
            Future<CompressionResult> compress(Uint8List rawData, CompressionConfig config) async {
                 return CompressionResult(data: rawData); // Echo back for testing
            }
       }

       void main() {
            test('Controller transforms data correctly', () async {
                 final api = MockApi();
                 // Inject API into controller and assert logic
            });
       }

      Integration Tests (On-Device)

      Use the integration_test package to verify the full round-trip. This ensures the native compilation and linking are correct.

      Summary

      Building a plugin is not just about making it work; it is about making it safe and scalable.

      • Federate: Split your logic from your platform implementations.
      • Strict Typing: Use Pigeon to eliminate runtime serialization errors.
      • Thread Management: Never block the Main Thread; offload heavy lifting immediately.
      • Lifecycle: Manage Activity attachment and detachment cleanly to avoid leaks.

      Ready to get started?

      Contact IVC for a free consultation and discover how we can help your business grow online.

      TECH

      December 8, 2025

       Building a Smart Camera Android App Using CameraX & ML Kit

      What is CameraX?

      CameraX is a Jetpack library that makes camera development easier by providing:

      • Simple camera preview
      • Image capture (photos)
      • Image analysis (frames for machine learning)
      • Consistent behavior across different devices
      • Fewer crashes compared to Camera API / Camera2

      CameraX works perfectly with ML Kit for on-device AI tasks.

      What is ML Kit?

      ML Kit is Google’s on-device machine learning library. It’s fast, doesn’t require internet, and supports:

      • Text Recognition (OCR)
      • Face Detection
      • Barcode Scanning
      • Object Detection & Tracking
      • Image Labeling
      • Pose Detection

      In this post, Text Recognition will be used as an example — but the same structure works for any ML Kit model.

      Step-by-Step Project Guide

      Step 1: Create a New Android Studio Project

      • Open Android Studio (Giraffe / Koala+ recommended)
      • Click New Project
      • Choose Empty Activity (or Compose Activity if you're building with Jetpack Compose)
      • Configure project settings:
        • Name: SmartCameraApp
        • Package: com.example.smartcamera
        • Minimum SDK: Android 8.0 (API 26) or higher
        • Build system: Gradle Kotlin DSL recommended
      • Click Finish

      The project will generate with a default MainActivity.kt.

      Step 2: Add Required Dependencies

      Open build.gradle (app level) and insert:

      CameraX

      def camerax_version = "1.3.3"

      implementation "androidx.camera:camera-core:$camerax_version"
      implementation "androidx.camera:camera-camera2:$camerax_version"
      implementation "androidx.camera:camera-lifecycle:$camerax_version"
      implementation "androidx.camera:camera-view:$camerax_version"
      implementation "androidx.camera:camera-mlkit-vision:$camerax_version"

      ML Kit — Text Recognition (example)

      implementation("com.google.mlkit:text-recognition:16.0.1")

      Step 3: Add Camera Permission

      Add the permission to AndroidManifest.xml:

      <uses-permission android:name="android.permission.CAMERA" />

      Request it at runtime using the Activity Result API.

      Step 4: Add a Camera Preview UI

      Open activity_main.xml:

      <androidx.camera.view.PreviewView
          android:id="@+id/previewView"
          android:layout_width="match_parent"
          android:layout_height="match_parent"
          android:layout_gravity="center" />

      PreviewView is the recommended CameraX preview component.

      Step 5: Request Camera Permission

      In MainActivity.kt:

      private val requestPermissionLauncher =
       registerForActivityResult(ActivityResultContracts.RequestPermission()) { isGranted ->
              if (isGranted) startCamera()
              else Toast.makeText(this, "Camera permission required.", Toast.LENGTH_LONG).show()
      }

      private fun requestCameraPermission() {
          requestPermissionLauncher.launch(Manifest.permission.CAMERA)
      }

      Call requestCameraPermission() inside onCreate().
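       For reference, a minimal onCreate() sketch tying this together, assuming the launcher and startCamera() shown in these steps:

       override fun onCreate(savedInstanceState: Bundle?) {
           super.onCreate(savedInstanceState)
           setContentView(R.layout.activity_main)

           // Start the camera right away if the permission is already granted,
           // otherwise ask for it (the launcher above calls startCamera() on grant).
           if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
               == PackageManager.PERMISSION_GRANTED
           ) {
               startCamera()
           } else {
               requestCameraPermission()
           }
       }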

      Step 6: Start CameraX

      private fun startCamera() {
          val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
          cameraProviderFuture.addListener({
              val cameraProvider = cameraProviderFuture.get()

              // Preview setup
              val preview = Preview.Builder().build().apply {
                  setSurfaceProvider(findViewById<PreviewView>(R.id.previewView).surfaceProvider)
              }

              // Image analysis setup
              val analysis = ImageAnalysis.Builder()
                  .setTargetResolution(Size(1280, 720))
                  .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                  .build()
                  .apply {
                      setAnalyzer(
                          Executors.newSingleThreadExecutor(),
                          MlKitAnalyzer()
                      )
                  }

              val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
              cameraProvider.unbindAll()
              cameraProvider.bindToLifecycle(
                  this,
                  cameraSelector,
                  preview,
                  analysis
              )
          }, ContextCompat.getMainExecutor(this))
      }

      Step 7: Implement the ML Kit Analyzer

      Create a new file MlKitAnalyzer.kt

       class MlKitAnalyzer : ImageAnalysis.Analyzer {
           // Latin-script recognizer from com.google.mlkit:text-recognition
           private val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

           @androidx.camera.core.ExperimentalGetImage
           override fun analyze(imageProxy: ImageProxy) {
               // Close the frame when no image is available, otherwise the pipeline stalls
               val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
               val rotation = imageProxy.imageInfo.rotationDegrees
               val inputImage = InputImage.fromMediaImage(mediaImage, rotation)
              recognizer.process(inputImage)
                  .addOnSuccessListener { result ->
                      Log.d("MLKit", "Detected Text: ${result.text}")
                  }
                  .addOnFailureListener { e ->
                      Log.e("MLKit", "Error: ${e.localizedMessage}")
                  }
                  .addOnCompleteListener {
                      imageProxy.close()
                  }
          }
      }

      Step 8: Optional Overlay Rendering

      (Used for bounding boxes, detection highlights, face frames)

      Overlay View (optional)

      class OverlayView(context: Context, attrs: AttributeSet? = null) : View(context, attrs) {
          var boxes: List<Rect> = emptyList()
              set(value) {
                  field = value
                  invalidate()
              }

          private val paint = Paint().apply {
              color = Color.GREEN
              style = Paint.Style.STROKE
              strokeWidth = 4f
          }

          override fun onDraw(canvas: Canvas) {
              super.onDraw(canvas)
              boxes.forEach { canvas.drawRect(it, paint) }
          }
      }

      Add to layout:

      <com.example.smartcamera.OverlayView
          android:id="@+id/overlayView"
          android:layout_width="match_parent"
          android:layout_height="match_parent" />
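       To feed boxes into this view, one option is an analyzer variant that reports bounding boxes back to the Activity through a callback. The sketch below reuses the text recognizer and is illustrative only; note that the boxes are in image coordinates and must still be mapped to PreviewView coordinates before drawing in a real app.

       class TextBoxAnalyzer(
           private val onBoxes: (List<Rect>) -> Unit
       ) : ImageAnalysis.Analyzer {
           private val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

           @androidx.camera.core.ExperimentalGetImage
           override fun analyze(imageProxy: ImageProxy) {
               val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
               val inputImage = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
               recognizer.process(inputImage)
                   .addOnSuccessListener { result ->
                       // Image-space rectangles of each detected text block
                       onBoxes(result.textBlocks.mapNotNull { it.boundingBox })
                   }
                   .addOnCompleteListener { imageProxy.close() }
           }
       }

       // In the Activity, wire it to the overlay (replacing MlKitAnalyzer from Step 6):
       // setAnalyzer(Executors.newSingleThreadExecutor(), TextBoxAnalyzer { boxes ->
       //     runOnUiThread { findViewById<OverlayView>(R.id.overlayView).boxes = boxes }
       // })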

      Step 9: Performance and Architecture Considerations

       • Use a single analyzer thread: Executors.newSingleThreadExecutor(), combined with STRATEGY_KEEP_ONLY_LATEST, keeps frames from piling up.
      • Set appropriate resolution: 1280×720 provides a balance of speed and detail.
      • Reuse ML Kit detector: Avoid creating new ML Kit instances per frame.
      • Move ML logic into a ViewModel for large apps: Ensures clean architecture and testability.

      Conclusion

      By combining CameraX with ML Kit, Android developers can build intelligent, production-ready camera applications with minimal complexity. The stack provides:

      • A modern, reliable camera pipeline.
      • High-performance on-device ML processing.
      • Clean integration with Jetpack architecture components.
      • Flexibility for a range of detection and recognition tasks.

      This approach is ideal for apps involving OCR, barcode scanning, identity verification, inventory automation, smart forms, and more.

      Ready to get started?

      Contact IVC for a free consultation and discover how we can help your business grow online.


      Reference:

      https://developer.android.com/media/camera/camerax

      https://developers.google.com/ml-kit/guides

       

      TECH

      December 8, 2025

      Clean Code with Vue.js

      Organize Your Project for Scalability

      A messy folder structure leads to messy code. Clean code begins with a clean project layout.

      Recommended folder structure:

      src/
            assets/
            components/
            composables/
            constants/
            services/
            stores/
            utils/
            views/
            router/
            App.vue
            main.js
       Folder / File and its purpose:

       • src/: Root directory containing all frontend source code.
       • assets/: Static assets such as images, fonts, icons, and global styles (CSS/SCSS).
       • components/: Reusable shared UI components used across the project.
       • composables/: Reusable logic using the Vue Composition API (e.g., useFetch, useAuth).
       • constants/: Centralized enums, configuration values, status codes.
       • services/: API services, HTTP logic, business service abstractions.
       • stores/: Global state management using Pinia.
       • utils/: Helper functions, validators, formatters, pure functions.
       • views/: Page-level components rendered by Vue Router.
       • router/: Vue Router configuration, routing tables, navigation guards.
       • App.vue: Root Vue component that hosts the entire application layout.
       • main.js: Application entry point: initializes the app, mounts the router and store.

      This separation prevents bloated components and encourages reuse.

      Use Consistent Naming Conventions

       Naming is one of the most important aspects of clean code, so follow these naming rules:

      Component naming: should use PascalCase

      Example: UserProfileCard.vue, LoginForm.vue, SidebarMenu.vue

      File naming for composables: should use camelCase

      Example: useUser.js, useAuth.js, usePagination.js

      Method naming: should describe intention:

      Example:
      ❌ Not good
      doPrice()
      check()
      getData()

      ✅️ Good
      calculateTotalPrice()
      validateForm()
      fetchUserProfile()

      Avoid abbreviations

      Example:
      ❌ Not good
      cfg, usr, prd

      ✅️ Good
      config, user, product

      Keep Components Small and Focused (Single Responsibility Principle)

      A clean Vue component should contain:

      ✅️ The template
      ✅️ Local UI logic
      ✅️ Minor state transitions

      It should not contain:
      ❌ Data fetching logic
      ❌ Business rules
      ❌ Repetitive utilities
      ❌ Large computed logic
      ❌ Large watchers

      When a component grows >300 lines, we should refactor it.

      Use Composables to Extract Reusable Logic

      Example: Before (bloated component)
       <script setup>
       import { ref, onMounted } from 'vue'

       const users = ref([])
       const loading = ref(false)

       async function loadUsers() {
         loading.value = true
         const res = await fetch('/api/users')
         users.value = await res.json()
         loading.value = false
       }

       onMounted(() => loadUsers())
       </script>

      Problems:
      ❌ Hard to reuse
      ❌ Hard to test
      ❌ Component becomes large

       After: using a composable to clean up the code
       useUsers.js
       import { ref, onMounted } from 'vue'

       export function useUsers() {
         const users = ref([])
         const loading = ref(false)

         async function loadUsers() {
           loading.value = true
           const response = await fetch('/api/users')
           users.value = await response.json()
           loading.value = false
         }

         onMounted(loadUsers)

         return { users, loading, loadUsers }
       }

      UserList.vue
      <script setup>
      import { useUsers } from '@/composables/useUsers'
      const { users, loading } = useUsers()
      </script>

      Benefits:
      ✅️ Reusable in multiple components
      ✅️ Testable
      ✅️ Cleaner component

      Keep Templates Clean and Expressive

      Templates should read almost like HTML.

      ❌ Bad Template
      <p>{{ price - discount > 0 ? price - discount : 0 }}</p>

      ✅️ Clean Template (Move logic to computed)
      const finalPrice = computed(() => Math.max(price.value - discount.value, 0))

      <p>{{ finalPrice }}</p>

      Diagram: Template Logic Separation
      Template → simple expressions
      Script → complex logic
      Composable → business logic

      Use Centralized State Management Wisely (Pinia Recommended)

      State should not float around components in unpredictable ways.
      Pinia is the recommended state solution.

      ✅️ Example Pinia Store
       import { defineStore } from 'pinia'

       export const useCartStore = defineStore('cart', {
         state: () => ({
           items: []
         }),
         actions: {
           addItem(item) {
             this.items.push(item)
           },
           removeItem(id) {
             this.items = this.items.filter(i => i.id !== id)
           }
         }
       })

      This keeps components lightweight.

      Avoid Magic Strings and Use Constants

      Magic strings make code fragile.

      ❌ Bad
      if (status === "SUCCESS_001") {
      ...
      }

      ✅️ Good
       status.js
       export const STATUS = {
         SUCCESS: 'SUCCESS_001',
         FAILED: 'FAILED_999'
       }

       Usage:
       if (status === STATUS.SUCCESS) { ... }

      Add Proper Documentation and Comments (When Needed)

      Do not write comments that explain the obvious.
      Document intention, not implementation.

      ✅️ Good JSDoc:

      /**
      * Formats an ISO date into DD/MM/YYYY format.
      */
      function formatDate(dateStr) { ... }

      Add Unit Tests to Reinforce Clean Architecture

      Testing ensures your clean code stays clean as the project evolves.
      Recommended tools:
      Vitest
      Vue Test Utils
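       For instance, a minimal Vitest sketch for the useUsers composable from earlier. It assumes Vitest is set up and the @ alias is configured; fetch is stubbed, and calling the composable outside a component only produces a Vue warning for onMounted.

       // useUsers.spec.js
       import { describe, it, expect, vi } from 'vitest'
       import { useUsers } from '@/composables/useUsers'

       describe('useUsers', () => {
         it('loads users and toggles the loading flag', async () => {
           // Stub the network call so the test stays fast and deterministic
           globalThis.fetch = vi.fn().mockResolvedValue({
             json: () => Promise.resolve([{ id: 1, name: 'Alice' }])
           })

           const { users, loading, loadUsers } = useUsers()
           const promise = loadUsers()
           expect(loading.value).toBe(true)

           await promise
           expect(loading.value).toBe(false)
           expect(users.value).toHaveLength(1)
         })
       })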

      Conclusion

      Clean code in Vue.js is not about perfection—it’s about clarity, intention, and long-term maintainability.
      By applying the principles covered in this guide:
      - Organizing your project
      - Naming consistently
      - Keeping components small
      - Using composables
      - Keeping templates clean
      - Centralizing shared state
      - Using testing tools
      Your Vue.js applications will become more maintainable, scalable, and enjoyable for everyone on the team.

      [References]

      https://javascript.plainenglish.io/7-vue-js-tricks-that-will-instantly-clean-up-your-code-7400cfa2e961


      Ready to get started?

      Contact IVC for a free consultation and discover how we can help your business grow online.

      TECH

      December 8, 2025

      How to Create and Manage Translation Files

      If you come from a web development background, seeing a .ts file extension might immediately make you think of TypeScript. However, in the world of C++ and the Qt Framework, .ts stands for Translation Source.

      If you want your application to reach a global audience, you cannot hard-code your strings in just one language. You need Internationalization (i18n).

      In this guide, we will walk you through the entire workflow of creating and managing Qt translation files, taking your app from a single language to a multilingual powerhouse.

      Prepare Your Code (Marking Strings)

      Before generating any files, you must tell Qt which strings in your application need to be translated. Qt doesn't guess; it looks for specific markers.

      For C++ Files (.cpp, .h)

      // Bad: Hard-coded

      QString text = "Hello World";

      // Good: Translatable

      QString text = QObject::tr("Hello World");

      For QML Files (.qml)

      Use the qsTr() function.

      Text {
          // Bad
          text: "Hello World"

          // Good
          text: qsTr("Hello World")
      }

      Configure the Project File

      Next, you need to define where the translation files will be stored. This step differs slightly depending on your build system.

      Using qmake (.pro)

      Add the TRANSLATIONS variable to your project file. This tells Qt what target languages you plan to support (e.g., Vietnamese and Japanese).

      # MyProject.pro

       TRANSLATIONS += languages/app_vi.ts \
                       languages/app_ja.ts

      Using CMake (CMakeLists.txt)

      If you are using Qt 6 and CMake, the setup is slightly more modern using qt_add_translations:

      # CMakeLists.txt

      find_package(Qt6 6.5 REQUIRED COMPONENTS Quick LinguistTools)

      qt_add_translations(appMyProject
          TS_FILES
          languages/app_vi.ts
          languages/app_ja.ts
      )

      Generate the .ts Files (The lupdate Step) 

      This is the core of our tutorial. You don't create .ts files manually; you generate them. The tool lupdate scans your C++ and QML source code, finds every string wrapped in tr() or qsTr(), and extracts them into an XML format.

      Via Qt Creator (Only for qmake)

      1. Open your project in Qt Creator.
      2. Go to the menu bar: Tools > External > Linguist > Update Translations (lupdate).
      3. Qt Creator will scan your code and create the .ts files in your project directory.

        Via Command Line (Terminal)

        Navigate to your project folder and run:

        # For qmake users
        \path\to\Qt\6.8.3\msvc2022_64\bin\lupdate MyProject.pro

        # For CMake users, you usually build the 'update_translations' target

        rmdir /s /q build

        cmake -S . -B build -DCMAKE_PREFIX_PATH="\path\to\Qt\6.8.3\msvc2022_64"

        cmake --build build --target update_translations

        Translate with Qt Linguist

        Now that you have the .ts files, it’s time to translate.

        1. Open the file (e.g., app_vi.ts) using Qt Linguist (installed with Qt).

        2. On the left, you will see a list of strings found in your code.
        3. Select a string, type the translation in the bottom pane, and mark it as "Done" (click the ? icon to turn it into a green checkmark).



        4. Save the file.

          Compile to Binary (.qm Files)

           Your application does not read .ts files directly because they are text-based (XML) and slow to parse. You must compile them into compact binary files (.qm).

          Using qmake (.pro)

          In Qt Creator: Go to Tools > External > Linguist > Release Translations (lrelease).

          This will generate app_vi.qm and app_ja.qm. These are the files you will actually deploy with your app.

          Using CMake (CMakeLists.txt)

          Navigate to your project folder and run:

          # For CMake users

          cmake --build build --target release_translations

          Load the Translation in Your App

          Finally, you need to tell your application to load the generated .qm file when it starts.

          Add this logic to your code:

          QTranslator translator;
          // Load the compiled binary translation file
          // ideally from the resource system (:/)
          if (translator.load(":/app_vi.qm")) {
               app.installTranslator(&translator);
          }
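           If you ship several languages, you can also let Qt pick the right file from the system locale. A minimal sketch, assuming the .qm files live under a ":/languages" resource prefix:

           #include <QGuiApplication>
           #include <QLocale>
           #include <QTranslator>

           int main(int argc, char *argv[])
           {
                QGuiApplication app(argc, argv);

                QTranslator translator;
                // Looks for app_<locale>.qm (e.g., app_vi.qm, app_ja.qm) in the
                // ":/languages" resource directory based on the user's system locale.
                if (translator.load(QLocale::system(), "app", "_", ":/languages")) {
                     app.installTranslator(&translator);
                }

                // ... create your QML engine or main window here ...
                return app.exec();
           }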

           Conclusion

          Internationalization (i18n) might seem like a daunting task when you are just starting out, but Qt provides one of the most robust workflows in the C++ ecosystem to handle it.

          By following this guide, you have moved away from hard-coding strings and adopted a professional workflow:

          1. Marking your code with tr().
          2. Automating extraction with lupdate.
          3. Compiling efficient binaries with lrelease.

            Ready to get started?

            Contact IVC for a free consultation and discover how we can help your business grow online.

            TECH

            December 8, 2025

            Guide to Creating Integration Tests for Terraform Code

            Welcome to this article! If you are working with Terraform to manage cloud infrastructure such as AWS, writing integration tests is an important step to verify that your code works correctly with the real provider (for example, creating actual resources on the cloud). In this blog, I will guide you through creating integration tests (real-world application to check actual resources) for a simple Terraform source code: provisioning an EC2 instance via a module. We will use the Terraform Test feature (available from Terraform 1.6 onwards) to apply real code, validate outputs, and automatically destroy resources afterwards.

            Project Structure

            The source code we are working with follows a basic structure:

            • main.tf: The main file that calls the module and sets up the provider.
            • modules/ec2_instance/: The module that provisions the EC2 instance.
            • tests/integration.tftest.hcl: The integration test file.

            1. Set Up Basic Terraform Source Code

            First, create the project folder structure and files.

            1.1 Create main.tf

            This file configures the AWS provider and calls the EC2 module.

            terraform {
              required_providers {
                aws = {
                  source  = "hashicorp/aws"
                  version = "~> 5.0"
                }
              }
            }
            
            provider "aws" {
              region = "us-east-1"
            }
            
            module "web_server" {
              source = "./modules/ec2_instance"  # Adjust path if needed
            
              env_name      = "dev"
              instance_type = "t3.micro"
              ami_id        = "ami-0fa3fe0fa7920f68e"  # Replace with a valid AMI ID for your region
            }
            
            output "server_ip" {
              value = module.web_server.public_ip
            }

             

            1.2 Create the ec2_instance Module

            Inside modules/ec2_instance, add the following files:

            main.tf

            resource "aws_instance" "web" {
              ami           = var.ami_id
              instance_type = var.instance_type
            
              tags = {
                Name = "web-server-${var.env_name}"
              }
            }
            
            output "public_ip" {
              value = aws_instance.web.public_ip
            }
            
            output "instance_type" {
              value = aws_instance.web.instance_type
            }
            
            output "ami" {
              value = aws_instance.web.ami
            }
            
            output "instance_state" {
              value = aws_instance.web.instance_state
            }

             

            variables.tf

            variable "env_name" {
              description = "Environment name"
              type        = string
            }
            
            variable "instance_type" {
              description = "EC2 instance type"
              type        = string
            }
            
            variable "ami_id" {
              description = "AMI ID for the EC2 instance"
              type        = string
            }

             

            Note: We add outputs such as instance_type, ami, and instance_state to make it easier to validate in integration tests (since internal resources are not directly exposed, and state is a computed value from AWS).

            2. Introduction to Terraform Test

            Terraform Test is an integrated framework for writing tests for Terraform code. For integration tests, it allows:

            • Real apply: Create actual resources on the cloud and validate them, with automatic destroy after the test finishes.
            • Run blocks: Execute commands such as apply or plan in a test environment.
            • Assertions: Validate conditions on outputs or resources.

            To run tests, you need Terraform >= 1.6. Run terraform test from the project root. Tests will automatically run all .tftest.hcl files inside the tests/ folder.

            3. Write Integration Test (Real Apply)

            Integration tests apply real code to AWS, create actual resources, validate them, and destroy them afterwards. This helps verify that the code works correctly with the real provider.

            Important notes:

            • You need valid AWS credentials (e.g., via environment variables such as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY).
            • This test will create (and destroy) real resources, so it may incur small costs (EC2 free tier may be free).

            Create a tests folder and add integration.tftest.hcl.

            run "integration_test_web_server" {
              command = apply
            
              assert {
                condition     = module.web_server.public_ip != null && module.web_server.public_ip != ""
                error_message = "Public IP should not be null or empty after apply."
              }
            
              assert {
                condition     = module.web_server.instance_type == "t3.micro"
                error_message = "Instance type does not match expected value."
              }
            
              assert {
                condition     = module.web_server.ami == "ami-0fa3fe0fa7920f68e"
                error_message = "AMI ID does not match expected value."
              }
            
              assert {
                condition     = module.web_server.instance_state == "running"
                error_message = "EC2 instance should be in running state."
              }
            }
            

             

            Explanation of the Integration Test File

            • run block: Executes apply with the real provider (no mock). Terraform will create an actual EC2 instance.
            • Assertions: Validate outputs (e.g., public_ip is not null, since now it’s a real value from AWS). Assertion on instance_state verifies the instance is running (a computed value from AWS).
            • Automatic cleanup: After the test finishes (pass or fail), Terraform will destroy the resource to avoid leftovers.

            To run the integration test:

            terraform test -filter=integration.tftest.hcl

            Or run all tests:

            terraform test

            4. Run Integration Test

            Run the command from the project root:

            taipham@Tais terraform2 % terraform test
            tests/intergration.tftest.hcl... in progress
              run "integration_test_web_server"... pass
            tests/intergration.tftest.hcl... tearing down
            tests/intergration.tftest.hcl... pass
            tests/unit.tftest.hcl... in progress
              run "test_web_server_intance_type"... pass
              run "test_web_server_module_ami"... pass
              run "test_web_server_module_ami_ip"... pass
            tests/unit.tftest.hcl... tearing down
            tests/unit.tftest.hcl... pass
            
            Success! 4 passed, 0 failed.

             

            5. Best Practices for Integration Tests in Terraform

            • Limit usage: Use integration tests to verify real behavior, but limit them to avoid high costs (run in CI/CD with a dev account).
            • Meaningful assertions: Validate inputs, outputs, tags, and resource states (such as "running").
            • Integrate with CI/CD: Run terraform test in pipelines (e.g., GitHub Actions) for automation.
            • Advanced usage: Use expect_failures for negative tests or test multiple scenarios via variables (see the sketch after this list).
            • Combine with unit tests: Add unit tests with mocks for faster checks; here we focus on integration.
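
            As a minimal sketch of the expect_failures idea, assume, for illustration only, that main.tf were extended with a root variable instance_type that has a validation block restricting the allowed values (this variable and its validation do not exist in the code above). A negative test could then run with command = plan, which is cheap because nothing is created:

            # Hypothetical sketch: assumes main.tf declares a root variable "instance_type"
            # with a validation block that only allows t3.* instance types.
            run "reject_invalid_instance_type" {
              command = plan

              variables {
                instance_type = "m5.24xlarge" # deliberately violates the validation rule
              }

              # The run passes only if var.instance_type fails its validation.
              expect_failures = [
                var.instance_type,
              ]
            }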

            Conclusion

            With integration tests, you can verify that your Terraform code works correctly in real environments without leaving leftover resources. In this example, we tested an EC2 instance with minimal cost.

            Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

            [References]

            https://developer.hashicorp.com/terraform
            https://www.sudeepa.com/?p=382 [Image link]

            TECH

            December 8, 2025

            Guide to Creating Unit Tests for Terraform Code

            If you are working with Terraform to manage cloud infrastructure such as AWS, writing unit tests is an important step to ensure your code works as expected without deploying real resources. In this blog, I will guide you through creating unit tests for a simple Terraform configuration that provisions an EC2 instance via a module. We will use the Terraform Test feature (available from Terraform 1.6 onwards) to mock providers and validate outputs.

            Project Structure

            The source code we are working with follows a basic structure:

            • main.tf: The main file that calls the module and sets up the provider.
            • modules/ec2_instance/: The module that provisions the EC2 instance.
            • tests/unit.tftest.hcl: The test file with mocks and assertions.

            1. Set Up Basic Terraform Source Code.

            First, create the project folder structure. 

            1.1 Create main.tf.

            This file configures the AWS provider and calls the EC2 module:

            terraform {
              required_providers {
                aws = {
                  source  = "hashicorp/aws"
                  version = "~> 5.0"
                }
              }
            }
            
            provider "aws" {
              region = "us-east-1"
            }
            
            module "web_server" {
              source = "./modules/ec2_instance"  # Adjust path if needed
            
              env_name      = "dev"
              instance_type = "t3.micro"
              ami_id        = "ami-0fa3fe0fa7920f68e"  # Replace with a valid AMI ID for your region
            }
            
            output "server_ip" {
              value = module.web_server.public_ip
            }

             

            1.2 Create the ec2_instance Module.

            Inside modules/ec2_instance, add the following files:

            main.tf

            resource "aws_instance" "web" {
              ami           = var.ami_id
              instance_type = var.instance_type
            
              tags = {
                Name = "web-server-${var.env_name}"
              }
            }
            
            output "public_ip" {
              value = aws_instance.web.public_ip
            }
            
            output "instance_type" {
              value = aws_instance.web.instance_type
            }
            
            output "ami" {
              value = aws_instance.web.ami
            }

             

            variables.tf

            
            variable "env_name" {
              description = "Environment name"
              type        = string
            }
            
            variable "instance_type" {
              description = "EC2 instance type"
              type        = string
            }
            
            variable "ami_id" {
              description = "AMI ID for the EC2 instance"
              type        = string
            }

             

            2. Introduction to Terraform Test.

            Terraform Test is an integrated framework for writing tests for Terraform code. It allows you to:

            • Mock providers: Simulate providers (like AWS) to avoid creating real resources, saving cost and time.
            • Run blocks: Execute commands such as apply or plan in a test environment.
            • Assertions: Validate conditions on outputs or resources.

            To run tests, you need Terraform >= 1.6 (mock providers require Terraform 1.7 or later). Run terraform test from the project root.

            3. Write Unit Tests

            Create a tests folder in the project root and add unit.tftest.hcl.

            mock_provider "aws" {
              alias = "mock"
            
              mock_resource "aws_instance" {
                defaults = {
                  id         = "i-1234567890abcdef0"
                  public_ip  = "192.0.2.1"
                  private_ip = "10.0.0.1"
                  arn        = "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0"
                }
              }
            }
            
            run "test_web_server_instance_type" {
              command = apply
            
              providers = {
                aws = aws.mock
              }
            
              assert {
                condition     = module.web_server.instance_type == "t3.micro"
                error_message = "Instance type does not match expected value."
              }
            }
            
            run "test_web_server_module_ami" {
              command = apply
            
              providers = {
                aws = aws.mock
              }
            
              assert {
                condition     = output.server_ip == "192.0.2.1"
                error_message = "The public IP output does not match the expected mocked value."
              }
            }
            
            run "test_web_server_module_ami_ip" {
              command = apply
            
              providers = {
                aws = aws.mock
              }
            
              assert {
                condition     = output.server_ip == "192.0.2.1"
                error_message = "The public IP output does not match the expected mocked value."
              }
            }

             

            Explanation of the Test File

            • mock_provider: Simulates the AWS provider. We mock the aws_instance resource with default values (like public_ip) so Terraform does not call real AWS APIs.
            • run block: Executes apply with the mock provider. No variables are needed since the module inputs are hardcoded in main.tf (see the sketch after this list for how per-run variables would look).
            • assert: Validates outputs from the module and root configuration. If the condition fails, the test fails with the given error message.
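
            For illustration, here is a minimal sketch of how per-run variables could be used, assuming main.tf were refactored so that instance_type is a root variable (with a default of "t3.micro") that is forwarded to the module; that variable and this run name do not exist in the code above:

            # Hypothetical sketch: assumes a root variable "instance_type" is declared in
            # main.tf and passed into the ec2_instance module.
            run "test_web_server_overridden_instance_type" {
              command = apply

              providers = {
                aws = aws.mock
              }

              variables {
                instance_type = "t3.small" # overrides the default for this run only
              }

              assert {
                condition     = module.web_server.instance_type == "t3.small"
                error_message = "Instance type should follow the value set in the run block."
              }
            }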

            4. Run Unit Tests

            The result after running the terraform test command looks like this:

            taipham@Tais terraform2 % terraform test
            tests/unit.tftest.hcl... in progress
              run "test_web_server_intance_type"... pass
              run "test_web_server_module_ami"... pass
              run "test_web_server_module_ami_ip"... pass
            tests/unit.tftest.hcl... tearing down
            tests/unit.tftest.hcl... pass
            
            Success! 3 passed, 0 failed.

             

            5. Best Practices for Unit Tests in Terraform

            • Isolate tests: Test modules individually if possible by placing test files inside the module folder (see the sketch after this list).
            • Mock only what’s necessary: Keep mocks simple to avoid complexity.
            • Meaningful assertions: Validate inputs, outputs, and tags to ensure correctness.
            • Integrate with CI/CD: Run terraform test in pipelines (e.g., GitHub Actions) for automation.
            • Advanced usage: Use expect_failures for negative tests or test multiple scenarios with variables.
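
            As a minimal sketch of testing the module in isolation, a test file could live at modules/ec2_instance/tests/module_unit.tftest.hcl (a hypothetical path) with terraform test run from inside the module folder. Because the module is then the configuration under test, its variables must be supplied by the run block and its resources are referenced directly:

            # Hypothetical sketch: a unit test placed inside the ec2_instance module folder.
            mock_provider "aws" {}

            run "module_in_isolation" {
              command = apply

              variables {
                env_name      = "test"
                instance_type = "t3.micro"
                ami_id        = "ami-0fa3fe0fa7920f68e"
              }

              assert {
                condition     = aws_instance.web.tags["Name"] == "web-server-test"
                error_message = "The Name tag should include the environment name."
              }
            }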

            Conclusion

            With unit tests, you can be confident that your Terraform code works correctly before applying it to real infrastructure. In this example, we tested an EC2 instance without incurring AWS costs.

            Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

            [References]

            https://developer.hashicorp.com/terraform
            https://www.sudeepa.com/?p=382 [Image link]

            TECH

            December 8, 2025

            Automation testing with Cursor AI

            In late October 2025, Cursor released a new feature called Browser. The browser is now embedded within the editor, featuring powerful new tools for component selection, full developer tools, and MCP controls for agents. The Agent in Cursor can use the web browser to test websites, audit accessibility, convert designs into code, and more. Automated testing is one of those use cases, and it is the one we will discuss in this post.

            1. Context

            I have a Sign In form and a Forgot Password form, and I want to create automated tests that:

            • Fill out forms with test data.
            • Click through workflows.
            • Test responsive designs.
            • Validate error messages.
            • Monitor the console for JavaScript errors.

            [Image: Forgot Password form]

            Previously, we were required to write test code using frameworks such as Selenium, which made the process of developing automation tests significantly time-consuming. But now with Cursor, we can approach automation testing in a much simpler way.

            2. Automation testing with Cursor AI

            Agent (Cursor AI) can execute comprehensive test suites and capture screenshots for visual regression testing.

            To create the automated tests for the requirements above, I simply need to write a prompt like this:

            @browser Fill out forms with test data, click through workflows, test responsive designs, validate error messages, and monitor console for JavaScript errors

             

            [Image: cursor test]

            You will see Cursor's testing progress on the right side, and the test will run in the browser.

            [Image: test process]

            The testing report is shown below:

            [Image: test report]

            3. Security

            The browser runs as a secure web view and is controlled by an MCP server running as an extension. Multiple layers of protection safeguard you against unauthorized access and malicious activity. Cursor's Browser integration has also been audited by numerous external security experts (see the reference below for details).

            4. Conclusion

            Although using Cursor AI for automation testing takes less time than writing test code, we still need to consider the cost of each AI test run (including re-runs, the number of screens, the models used, etc.).

             

            Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

             

            [References]

            https://cursor.com/docs/agent/browser#automated-testing
