Wait, where did my background go?
A Human-Body feature guide to Huawei ML Kit's Static Image Segmentation
Hello from home as it should be, Earth people. I'm your guide, Yekta. In this article, we're going to see another wonder of Huawei's Machine Learning Kit: a popular feature called Image Segmentation. If you have dealt with Machine Learning before, this term will be familiar to you. It is a technique that identifies the different elements in an image or camera stream, and it's good to know that Huawei's Image Segmentation supports both, so we can build various use cases for our projects.
There is a lot to cover in Image Segmentation, and this little guide focuses only on Static Image Segmentation. That keeps the article concise, so you can read exactly what you care about right now instead of scrolling past sections you don't need.
## Image Segmentation
> The image segmentation service segments the same elements (such as the human body, plant, and sky) from an image. The elements supported include the human body, sky, plant, food, cat, dog, flower, water, sand, building, mountain, and others. This service supports the segmentation of static images and dynamic camera streams and provides the human body and multiclass segmentation capabilities. Fine segmentation is supported. The mean intersection over union (MIoU), an indicator for measuring the image segmentation precision, is better than the industry level.
That was the explanation of Image Segmentation from Huawei's official documentation. In this article, though, we're going to focus on the Human-Body model package.
## What is the Human-Body model package?
As its name hints, it is a segmentation model built specifically for the human body. There is also another model package called Multiclass, which, as you would guess, covers every other supported element: sky, plant, food, cat, dog, flower, water, sand, building, mountain, and so on. These two model packages give us the option to pull in only what we need (the sketch below shows how each one is selected). If you think about mobile applications, human body segmentation is much more likely to be used than the other segmentation types, so it makes sense to modularize the most used model.
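To make the distinction concrete, here is a minimal sketch of selecting each package through `MLImageSegmentationSetting`. This is based on the HMS ML Kit segmentation API as documented; double-check the constant names against the SDK version you use.

```kotlin
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationAnalyzer
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationSetting

// Human-Body model package: segments people only.
val bodySetting: MLImageSegmentationSetting = MLImageSegmentationSetting.Factory()
    .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
    .create()

// Multiclass model package: sky, plant, food, cat, dog, and the rest.
val multiclassSetting: MLImageSegmentationSetting = MLImageSegmentationSetting.Factory()
    .setAnalyzerType(MLImageSegmentationSetting.IMAGE_SEG)
    .create()

// Create the analyzer with whichever package your use case needs.
val analyzer: MLImageSegmentationAnalyzer = MLAnalyzerFactory.getInstance()
    .getImageSegmentationAnalyzer(bodySetting)
```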
## Development
The Static Image Segmentation sample demonstrates the flow: the user chooses an image from the gallery, the app analyzes it, and the final output is shown in an image-comparison view that slides between the original image and the segmented output.
Before delving into the topic, you can see the static image segmentation preview below. As you may have noticed, it only detects human body elements in the image. That's why Mando and Baby Yoda were not extracted from the image like Luke; in other words, the Image Segmentation mechanism did not see them as human bodies. Wow, that sounds a bit racist 😄. But this was the expected output, and it shows how accurately Huawei's Image Segmentation works with the given configuration, which is Human-Body in this case.
## Static Image Segmentation Preview
First things first, we initialize the analyzer with the desired settings. In our case, simply creating it as shown below gives us an analyzer object that uses precise segmentation and the human body model by default.
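A minimal sketch of that default initialization, assuming the standard HMS ML Kit factory API, could look like this:

```kotlin
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationAnalyzer

// Created without a custom setting, the analyzer defaults to
// fine (exact) segmentation with the Human-Body model.
val analyzer: MLImageSegmentationAnalyzer =
    MLAnalyzerFactory.getInstance().imageSegmentationAnalyzer
```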
After the initiation phase, we should convert our Bitmap to an MLFrame. Luckily, MLFrame has a helper function for that, so calling `MLFrame.fromBitmap(Bitmap)` is enough to do the conversion.
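For illustration, a tiny helper along these lines would do the conversion (`toMLFrame` is just a hypothetical name for this sketch):

```kotlin
import android.graphics.Bitmap
import com.huawei.hms.mlsdk.common.MLFrame

// Wraps the Bitmap the user picked from the gallery into an MLFrame
// that the analyzer can consume.
fun toMLFrame(bitmap: Bitmap): MLFrame = MLFrame.fromBitmap(bitmap)
```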
Then, we call the `analyze(…)` method to initiate analyzing the image. First, we assign the `Task` returned by `asyncAnalyseFrame(MLFrame)` to a variable so we can observe its states asynchronously; `Task` is a generic observable state class for the result of the analysis. You could also use the synchronous `analyseFrame(MLFrame)` method if that fits your use case better. `MLImageSegmentation` is nothing but a data class that provides the foreground, grayscale, and original Bitmaps, plus a `masks` field, which is a byte array; I won't cover it here to stay within the article's scope. Then, we register our code against two states, on success and on failure. As you would guess, `addOnSuccessListener(…)` delivers the `MLImageSegmentation` result. Lastly, `configureAfterImage(Bitmap)` just sets the after image so it can be compared with the original one.
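Putting those pieces together, the analysis step could be sketched like this. `configureAfterImage(…)` stands in for the sample's own helper mentioned above, and the listener calls follow the HMS Task API:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.huawei.hms.mlsdk.common.MLFrame
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentation
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationAnalyzer

fun analyze(analyzer: MLImageSegmentationAnalyzer, frame: MLFrame) {
    // asyncAnalyseFrame returns a Task<MLImageSegmentation> we can observe.
    analyzer.asyncAnalyseFrame(frame)
        .addOnSuccessListener { segmentation: MLImageSegmentation ->
            // foreground: the human body cut out on a transparent background.
            // grayscale, original, and the masks byte array are also available.
            configureAfterImage(segmentation.foreground)
        }
        .addOnFailureListener { e ->
            Log.e("ImageSegmentation", "Analysis failed", e)
        }
}

// Stand-in for the sample's helper that shows the "after" image
// next to the original in the comparison view.
fun configureAfterImage(after: Bitmap) { /* update the comparison view */ }
```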
I suggest using debug mode while developing your use case, because Android Studio's debug pane offers rich capabilities such as displaying Bitmaps. That feature is a convenient way to inspect the outputs of `MLImageSegmentation`.
Last but not least, don’t forget to release your resources.
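For example, something along these lines; this assumes the analyzer's `stop()` method, which the HMS ML Kit analyzers expose for releasing resources:

```kotlin
import android.util.Log
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationAnalyzer
import java.io.IOException

// Call this once the analyzer is no longer needed, e.g. in onDestroy().
fun releaseAnalyzer(analyzer: MLImageSegmentationAnalyzer) {
    try {
        analyzer.stop()
    } catch (e: IOException) {
        Log.e("ImageSegmentation", "Failed to release the analyzer", e)
    }
}
```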
And that’s it.
## Test
⚠️ Every HMS integration requires the same initial steps. You can use this link to prepare your app before implementing features into it. Please don't skip this part; it is a mandatory phase, and HMS kits will not work as they should without it.
After reading it, you should do one or two things to run the app. First, enable ML Kit under the Manage APIs tab in AppGallery Connect; you should see the screen in the image below once it is enabled.
Then, download the generated `agconnect-services.json` file and place it under the `app` directory.
## GitHub Repository
That is it for this article. You can search for any question that comes to mind on the Huawei Developer Forum, and you can find lengthy, detailed videos on the Huawei Developers YouTube channel. These resources diversify your learning channels and make it easy to pick things up from a huge knowledge pool. In short, there is something for everybody here 😄. Please comment if you have any questions. Stay tuned for more HMS development resources. Thanks for reading. Be safe, folks.