
Android Vs iOS App Development – Which is Right for Your Business?

 

Businesses have always found it difficult to choose between the Android and iOS platforms for their mobile app development needs. iOS is said to bring in more revenue, while Android enjoys the largest market share. Both platforms are unique in their own way.

 

Android vs iOS – Understanding the Mobile Technologies 

 

Over the last few years, the mobile app development arena has grown significantly, and for businesses this means many opportunities to consider. The most common dilemma they face is choosing between Android and iOS as their app development platform, which can be hard as both come with their own set of pros and cons.

With more businesses turning mobile-centric, it is worth knowing which platform can benefit them more.

In this post, we will look at both platforms in detail and then decide which one can offer your business more benefits.

 

What is iOS app development?  

 

iOS is a mobile operating system developed and maintained by Apple, built specifically for Apple devices such as the iPhone and iPod touch. iOS applications are developed using C, C++, Objective-C, and Swift; of these, Swift is the most widely used today.

 

What is Android app development? 

 

The Android platform is used to create applications that run on Android-powered devices. Android applications are built for a wide variety of purposes and industries, using programming languages like Kotlin and Java.

User Demographics and Market Share

 

It is wise to understand the user demographics and market share of Android and iOS. A better understanding of both helps app developers and marketers pinpoint and reach their target audience effectively. Here, we will see how market share and user demographics impact the success of an app on each platform.

User demographics of Android and iOS apps 

 

There is a significant difference between Android and iOS user demographics. In general, iOS users tend to be younger, wealthier, and more educated, while Android users span a broader demographic range and, on average, belong to lower income groups than iOS users.

Market share of Android and iOS apps 

 

As per Statista, Android remained the leading mobile operating system in the second quarter of 2023 with a market share of 70.8%, while iOS held 28.4% for the same quarter. Looked at region-wise, however, iOS holds a much larger share in markets such as the United States.

How do user demographics and market share influence an app's success?

 

App developers and marketers need a clear idea of user demographics and market share when trying to reach their target audience. If your app targets the largest possible audience, including lower-income groups, consider Android app development and plan your marketing efforts accordingly. If, on the other hand, your app targets a younger, wealthier crowd that spends more freely, invest in iOS app development and shape your marketing strategy around it.

Languages and Tools Used for App Development

 

Android and iOS are different mobile operating systems that use different languages and tools for app development. To help you select the right platform for your business, let's compare the languages and tools used for Android and iOS app development.

Languages used for Android app development

 

Java has long been the most common language for Android app development, but modern Android apps are increasingly built with Kotlin, which Google now recommends. Compared to Java, Kotlin is more expressive and concise, as the sketch below illustrates.
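As a minimal, hypothetical illustration of that conciseness, the single Kotlin data class below provides a constructor, getters, equals(), hashCode(), toString(), and copy() for free, all of which would have to be written or generated by hand in an equivalent Java class.

// Hypothetical example: a simple model class in Kotlin.
data class User(val name: String, val age: Int)

fun main() {
    val user = User("Alice", 30)
    println(user)                 // User(name=Alice, age=30)
    println(user.copy(age = 31))  // copy() comes for free with data classes
}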

Tools used for Android app development

 

Android development is primarily carried out in Android Studio, an IDE built on IntelliJ IDEA that comes with a code editor, debugging tools, and an Android emulator. For designing user interfaces, Android Studio includes a graphical layout editor.

Languages used for iOS app development

 

Swift, developed by Apple, is the language of choice for iOS app development. Compared to Objective-C, which was previously the standard, Swift is faster, safer, and easier to learn and apply.

Tools used for iOS app development

 

iOS app development is primarily carried out in Xcode, an IDE that comes with a code editor, debugging tools, and an iOS simulator. For designing user interfaces, Xcode includes a graphical tool named Interface Builder.

Security And Privacy Features

 

Whether to choose Android or iOS has always been a point of debate, and the choice of platform also depends on the security and privacy features each offers.

iOS security and privacy features 

 

iOS comes with robust security features. Apple follows a strict review policy for apps published on the App Store. App Transport Security enforces secure connections for data transmission, and sandboxing limits each app's access to resources elsewhere on the device.

Apple gives privacy the same importance as security. App Tracking Transparency requires apps to ask for the user's permission before tracking their activity across other apps and websites, and Privacy Labels require developers to disclose what data they collect and how it is used.

Android security and privacy features 

 

Android's security features have been updated many times over the years. Like Apple, Google reviews apps before they are published on the Google Play Store, and Google Play Protect scans apps for security threats and malware. Application sandboxing further restricts what information an app can access on the device.

On the privacy side, Android's runtime permissions let users control how much data each app can access.

Integration With Third-party Devices

 

Different approaches 

When it comes to integration with third-party devices, Android and iOS take different approaches. iOS, with its closed ecosystem, lets you build applications for a limited range of devices, while Android's open ecosystem lets you build for a much wider variety of devices.

Device compatibility 

Before going ahead with development, investigate the third-party devices with which you want your app to integrate and check whether developers have access to the devices' APIs or SDKs. That tells you how easy or difficult the integration will be.

Framework selection 

The framework you choose largely determines how easy or hard it will be to integrate an app with third-party devices. On iOS, Apple's Core Bluetooth framework is typically used, as it lets applications communicate seamlessly with Bluetooth-enabled devices. On Android, the Android Open Accessory Development Kit helps apps communicate with USB and Bluetooth accessories (see the sketch below).
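As a minimal, hypothetical Kotlin sketch of such an integration, the helper below lists the Bluetooth devices already paired with an Android phone; it assumes the BLUETOOTH_CONNECT runtime permission (required on Android 12+) has already been granted.

import android.bluetooth.BluetoothManager
import android.content.Context
import android.util.Log

// Log the Bluetooth devices already paired with this device.
// Assumes the BLUETOOTH_CONNECT permission has been granted (Android 12+).
fun logPairedDevices(context: Context) {
    val manager = context.getSystemService(BluetoothManager::class.java)
    val adapter = manager?.adapter ?: return  // device has no Bluetooth support
    adapter.bondedDevices.forEach { device ->
        Log.d("Bluetooth", "Paired: ${device.name} (${device.address})")
    }
}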

Stay up to date with changes 

Android and iOS go through constant changes and upgrades, and such changes can affect integration with third-party devices. It is therefore important to keep an eye on platform updates and verify that the APIs or SDKs you rely on are still available.

Pros & Cons of Android and iOS App Platforms 

 


Android vs iOS App Development in a Nutshell

 

Android and iOS are currently the two major mobile operating systems on the market. Android is an open-source platform and is therefore available on a variety of devices from manufacturers like Google and Samsung, while iOS, developed by Apple, is limited to Apple's iPhones and iPads.

When choosing between Android and iOS, you need to consider a few things. Android offers more customization options and is more flexible than iOS, Android devices are available at a wide range of price points, and many Android devices offer longer battery life than comparable iOS devices.

iOS devices, on the other hand, have powerful hardware, are more secure, and offer a polished user experience. Many businesses favor iOS because it integrates easily with productivity apps like Keynote, Pages, and Numbers.

Ultimately, whether to choose iOS or Android completely depends on the preferences and requirements of the business.


How to build a QR code scanner app using Google ML Kit and CameraX?

One of the most commonly requested features in a mobile application is a QR code scanner. QR codes and barcodes are an effective way of passing information to people using an app.
In this post, we will see how to build a QR code scanner app using Google ML Kit and CameraX.

What is CameraX?

CameraX is part of the Android Jetpack libraries. It provides an easy-to-use, consistent API surface that works the same way across most Android devices, which simplifies camera development. You don't have to write device-specific code, which greatly reduces device compatibility issues.

 

What is Google ML Kit?

Google ML Kit is a mobile SDK that brings Google's machine learning expertise to Android and iOS apps. It is an easy-to-use yet powerful package that helps developers build custom on-device ML features that work smoothly across different devices.

 

What is the QR code scanning API of ML Kit?

ML Kit's barcode scanning API lets you read data encoded in most standard barcode formats, including QR codes. When a user scans a code, ML Kit recognizes and parses the data automatically, letting your app respond quickly.

Let’s create a QR code scanning project

To create a project…

1. Open Android Studio and select File > New Project with the Empty Activity template.

2. Now open the AndroidManifest.xml file to declare the camera permission and the camera hardware feature. Add the following inside the manifest tag:

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" /> 

3. Open the app/build.gradle file and add the dependencies for CameraX and barcode scanning (QR codes):

//For barcode scanner (QR code scan)
implementation 'com.google.mlkit:barcode-scanning:17.1.0'
//For CameraX
implementation("androidx.camera:camera-core:1.2.2")
implementation("androidx.camera:camera-camera2:1.2.2")
implementation("androidx.camera:camera-lifecycle:1.2.2")
implementation("androidx.camera:camera-view:1.2.2")

To enable data binding, set dataBinding to true under buildFeatures inside the android block:

buildFeatures {
    dataBinding = true
}

4. Now add a PreviewView to the main activity layout (activity_main.xml).

PreviewView is a custom View that displays the camera feed for CameraX's Preview use case.

<?xml version="1.0" encoding="utf-8"?>
	<androidx.coordinatorlayout.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android"
	    xmlns:app="http://schemas.android.com/apk/res-auto"
	    xmlns:tools="http://schemas.android.com/tools"
	    android:layout_width="match_parent"
	    android:layout_height="match_parent"
	    android:background="#000000"
	    tools:context=".MainActivity">
	    <androidx.cardview.widget.CardView
	        android:layout_width="300dp"
	        android:layout_height="300dp"
	        android:layout_gravity="center"
	        app:cardCornerRadius="20dp">
	        <RelativeLayout
	            android:layout_width="match_parent"
	            android:layout_height="match_parent">
	            <androidx.camera.view.PreviewView
	                android:id="@+id/preview"
	                android:layout_width="300dp"
	                android:layout_height="300dp"
	                android:layout_centerInParent="true"
	                android:layout_centerHorizontal="true" />
	            <androidx.appcompat.widget.AppCompatImageView
	                android:layout_width="300dp"
	                android:layout_height="300dp"
	                android:layout_centerInParent="true"
	                android:layout_centerHorizontal="true"
	                android:background="@drawable/background_image" />
	        </RelativeLayout>
	    </androidx.cardview.widget.CardView>
	</androidx.coordinatorlayout.widget.CoordinatorLayout>

5. The next step is to check whether the camera permission required by CameraX has been granted. If not, we request it in code.

class MainActivity : AppCompatActivity() {
		private lateinit var binding: ActivityMainBinding
	    override fun onCreate(savedInstanceState: Bundle?) {
	        super.onCreate(savedInstanceState)
	        binding = ActivityMainBinding.inflate(layoutInflater)
        	setContentView(binding.root)
	        if (isCameraPermissionGranted()) {
	            // startCamera
	        } else {
	            ActivityCompat.requestPermissions(
	                this,
	                arrayOf(Manifest.permission.CAMERA),
	                PERMISSION_CAMERA_REQUEST
	            )
	        }
	    }
	    override fun onRequestPermissionsResult(
	        requestCode: Int,
	        permissions: Array<String>,
	        grantResults: IntArray
	    ) {
	        if (requestCode == PERMISSION_CAMERA_REQUEST) {
	            if (isCameraPermissionGranted()) {
	                // start camera
	            } else {
	                Log.e(TAG, "no camera permission")
	            }
	        }
	        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
	    }
	    private fun isCameraPermissionGranted(): Boolean {
	        return ContextCompat.checkSelfPermission(
	            baseContext,
	            Manifest.permission.CAMERA
	        ) == PackageManager.PERMISSION_GRANTED
	    }
	    companion object {
	        private val TAG = MainActivity::class.java.simpleName
	        private const val PERMISSION_CAMERA_REQUEST = 1
	    }
	} 

[ExecutorService: An ExecutorService maintains a pool of threads and assigns tasks to them. If there are more tasks than free threads, it queues the tasks until a thread becomes available.]
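As a small, standalone illustration (not part of the app code), a single-threaded executor queues tasks and runs them one at a time on its worker thread:

import java.util.concurrent.Executors

fun main() {
    val executor = Executors.newSingleThreadExecutor()
    repeat(3) { i ->
        // Tasks are queued and executed sequentially on the single worker thread.
        executor.execute { println("Task $i on ${Thread.currentThread().name}") }
    }
    executor.shutdown()  // stop accepting new tasks; queued ones still run
}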

 

6. Now it is time to implement the camera Preview use case.

To use Preview, you first define a configuration, which is then used to create an instance of the use case. Once the instance is created, you bind it to the CameraX lifecycle.


ProcessCameraProvider is a singleton used to bind the camera's lifecycle to a lifecycle owner. Because CameraX is lifecycle-aware, you no longer need to worry about opening and closing the camera yourself.


Add a listener (Runnable) to obtain the ProcessCameraProvider from cameraProviderFuture. Also, declare a camera executor to manage the analysis thread.

cameraExecutor = Executors.newSingleThreadExecutor()
	cameraProviderFuture = ProcessCameraProvider.getInstance(this)
	cameraProviderFuture?.addListener({
	            try {
	                val processCameraProvider = cameraProviderFuture?.get()
	                //bind camera view here
	            } catch (e: ExecutionException) {
	                e.printStackTrace()
	            } catch (e: InterruptedException) {
	                e.printStackTrace()
	            }
	        }, ContextCompat.getMainExecutor(this))

7. First, bind the PreviewView's surface provider to the Preview use case. After that, bind your cameraSelector, preview, and analysis use cases to the processCameraProvider.

[ImageCapture is designed for basic picture capturing. It provides a takePicture() function that captures a picture, saves it to memory or a file, and provides image metadata. Pictures are taken in automatic mode once focus has converged.]

[Detecting barcodes: We use the ImageAnalysis use case for this. It lets us define a custom class that implements the ImageAnalysis.Analyzer interface and is called with each incoming camera frame.]

val preview = Preview.Builder().build()
	        val cameraSelector =
	            CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build()
	        preview.setSurfaceProvider(binding.preview.surfaceProvider)
	        val imageCapture = ImageCapture.Builder().build()
	        val imageAnalysis = ImageAnalysis.Builder().setTargetResolution(Size(1280, 720))
	            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST).build()
	        imageAnalysis.setAnalyzer(cameraExecutor!!, analyzer!!)
	        processCameraProvider?.unbindAll()
	        processCameraProvider?.bindToLifecycle(
	            this,
	            cameraSelector,
	            preview,
	            imageCapture,
	            imageAnalysis
	        )
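Note that the ImageCapture use case is bound above but never triggered in this tutorial. As a hypothetical sketch (the capturePhoto helper and its output file are assumptions, and it would live in the Activity so that this provides a Context), a capture could look like this:

// Hypothetical helper: saves the next captured frame to the given file.
// Requires java.io.File in addition to the androidx.camera.core classes used above.
private fun capturePhoto(imageCapture: ImageCapture, outputFile: File) {
    val outputOptions = ImageCapture.OutputFileOptions.Builder(outputFile).build()
    imageCapture.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(this),
        object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(results: ImageCapture.OutputFileResults) {
                Log.d(TAG, "Photo saved: ${results.savedUri}")
            }
            override fun onError(exception: ImageCaptureException) {
                Log.e(TAG, "Photo capture failed", exception)
            }
        }
    )
}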

8. The next step is to create a custom class for image analysis that receives the incoming camera frames. Here we don't have to worry about managing the camera session state or even disposing of images; as with other lifecycle-aware components, binding to the app's desired lifecycle is enough.

class MyImageAnalyzer(private val activity: Activity) : ImageAnalysis.Analyzer {
	        override fun analyze(image: ImageProxy) {
	            // we can analyze incoming images here
	        }
	    } 

9. We receive each incoming frame as an ImageProxy and get the underlying Image from it, then detect barcodes with the ML Kit barcode scanner.

To detect a barcode, we create an InputImage from the Image and pass the InputImage object to the BarcodeScanner's process method, as shown below:

@SuppressLint("UnsafeOptInUsageError")
	        private fun scanBarCode(image: ImageProxy) {
	            val image1 = image.image
	            if (image1 != null) {
	                val inputImage = InputImage.fromMediaImage(image1, image.imageInfo.rotationDegrees)
	                val barcodeScannerOptions = BarcodeScannerOptions.Builder()
	                    .setBarcodeFormats(
	                        Barcode.FORMAT_QR_CODE,
	                        Barcode.FORMAT_AZTEC
	                    )
	                    .build()
	                val scanner = BarcodeScanning.getClient(barcodeScannerOptions)
	                scanner.process(inputImage)
	                    .addOnSuccessListener { barcodes ->
	                        // Task completed successfully
	                        // ...
	                        readerBarcodeData(barcodes)
	                    }
	                    .addOnFailureListener {
	                        // Task failed with an exception
	                        // ...
	                    }.addOnCompleteListener {
	                        image.close()
	                    }
	            }
	        }
	        private fun readerBarcodeData(barcodes: List<Barcode>) {
	            for (barcode in barcodes) {
	                Log.e(
	                    "barcode recognize", "QR Code: " + barcode.displayValue
	                ) //Returns barcode value in a user-friendly format.
	                Log.e(
	                    "barcode recognize", "Raw Value: " + barcode.rawValue
	                ) //Returns barcode value as it was encoded in the barcode.
	                Log.e(
	                    "barcode recognize", "Code Type: " + barcode.valueType
	                ) //This will tell you the type of your barcode
	                Toast.makeText(activity, barcode.displayValue, Toast.LENGTH_SHORT).show()
	            }
	        } 

That’s it!
You should now be able to scan a QR code with your Android device's camera and read the information encoded in it.

 

Here is the complete code for the QR code scanner built with Google ML Kit and CameraX:

class MainActivity : AppCompatActivity() {
	    private lateinit var binding: ActivityMainBinding
	    private var cameraProviderFuture: ListenableFuture<ProcessCameraProvider>? = null
	    private var cameraExecutor: ExecutorService? = null
	    private var analyzer: MyImageAnalyzer? = null
	    override fun onCreate(savedInstanceState: Bundle?) {
	        super.onCreate(savedInstanceState)
	        binding = ActivityMainBinding.inflate(layoutInflater)
	        setContentView(binding.root)
	        window.setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN)
	        if (isCameraPermissionGranted()) {
	            // startCamera
	            startCamera()
	        } else {
	            ActivityCompat.requestPermissions(
	                this,
	                arrayOf(Manifest.permission.CAMERA),
	                PERMISSION_CAMERA_REQUEST
	            )
	        }
	    }
	    private fun startCamera() {
	        cameraExecutor = Executors.newSingleThreadExecutor()
	        cameraProviderFuture = ProcessCameraProvider.getInstance(this)
	        analyzer = MyImageAnalyzer(this)
	        cameraProviderFuture?.addListener({
	            try {
	                val processCameraProvider = cameraProviderFuture?.get()
	                bindPreview(processCameraProvider)
	            } catch (e: ExecutionException) {
	                e.printStackTrace()
	            } catch (e: InterruptedException) {
	                e.printStackTrace()
	            }
	        }, ContextCompat.getMainExecutor(this))
	    }
	    private fun bindPreview(processCameraProvider: ProcessCameraProvider?) {
	        val preview = Preview.Builder().build()
	        val cameraSelector =
	            CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build()
	        preview.setSurfaceProvider(binding.preview.surfaceProvider)
	        val imageCapture = ImageCapture.Builder().build()
	        val imageAnalysis = ImageAnalysis.Builder().setTargetResolution(Size(1280, 720))
	            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST).build()
	        imageAnalysis.setAnalyzer(cameraExecutor!!, analyzer!!)
	        processCameraProvider?.unbindAll()
	        processCameraProvider?.bindToLifecycle(
	            this,
	            cameraSelector,
	            preview,
	            imageCapture,
	            imageAnalysis
	        )
	    }
	    class MyImageAnalyzer(private val activity: Activity) : ImageAnalysis.Analyzer {
	        override fun analyze(image: ImageProxy) {
	            scanBarCode(image)
	        }
	        @SuppressLint("UnsafeOptInUsageError")
	        private fun scanBarCode(image: ImageProxy) {
	            val image1 = image.image
	            if (image1 != null) {
	                val inputImage = InputImage.fromMediaImage(image1, image.imageInfo.rotationDegrees)
	                val barcodeScannerOptions = BarcodeScannerOptions.Builder()
	                    .setBarcodeFormats(
	                        Barcode.FORMAT_QR_CODE,
	                        Barcode.FORMAT_AZTEC
	                    )
	                    .build()
	                val scanner = BarcodeScanning.getClient(barcodeScannerOptions)
	                scanner.process(inputImage)
	                    .addOnSuccessListener { barcodes ->
	                        // Task completed successfully
	                        // ...
	                        readerBarcodeData(barcodes)
	                    }
	                    .addOnFailureListener {
	                        // Task failed with an exception
	                        // ...
	                    }.addOnCompleteListener {
	                        image.close()
	                    }
	            }
	        }
	        private fun readerBarcodeData(barcodes: List<Barcode>) {
	            for (barcode in barcodes) {
	                Log.e(
	                    "barcode recognize", "QR Code: " + barcode.displayValue
	                ) //Returns barcode value in a user-friendly format.
	                Log.e(
	                    "barcode recognize", "Raw Value: " + barcode.rawValue
	                ) //Returns barcode value as it was encoded in the barcode.
	                Log.e(
	                    "barcode recognize", "Code Type: " + barcode.valueType
	                ) //This will tell you the type of your barcode
	                Toast.makeText(activity, barcode.displayValue, Toast.LENGTH_SHORT).show()
	            }
	        }
	    }
	    override fun onRequestPermissionsResult(
	        requestCode: Int,
	        permissions: Array<String>,
	        grantResults: IntArray
	    ) {
	        if (requestCode == PERMISSION_CAMERA_REQUEST) {
	            if (isCameraPermissionGranted()) {
	                // start camera
	            } else {
	                Log.e(TAG, "no camera permission")
	            }
	        }
	        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
	    }
	    private fun isCameraPermissionGranted(): Boolean {
	        return ContextCompat.checkSelfPermission(
	            baseContext,
	            Manifest.permission.CAMERA
	        ) == PackageManager.PERMISSION_GRANTED
	    }
	    companion object {
	        private val TAG = MainActivity::class.java.simpleName
	        private const val PERMISSION_CAMERA_REQUEST = 1
	    }
	}