android - CameraX image analysis ImageProxy size and PreviewView size are not the same

I'm trying to use Firebase's ML Kit for face detection with CameraX. I'm having a hard time getting the image analysis ImageProxy size to match the PreviewView size. For both image analysis and the preview, I've passed the PreviewView's width and height to setTargetResolution(). However, when I check the size of the ImageProxy in the analyzer, it gives me 1920 as the width and 1080 as the height, while my PreviewView is 1080 wide and 2042 tall. When I swap the width and height in setTargetResolution() for image analysis, I get 1088 for both the width and the height of the ImageProxy. My PreviewView is also locked to portrait mode.

Ultimately, I need to feed the raw ImageProxy data and the face point data into AR code, so scaling up just the graphics overlay that draws the face points will not work for me.

Q: If there is no way to fix this within the CameraX libraries, how do I scale the ImageProxy returned from the analyzer to match the PreviewView?

I'm using Java and the latest CameraX libs:

def camerax_version = "1.0.0-beta08"


1 Answer


It's quite difficult to ensure both the preview and image analysis use cases have the same output resolution, since different devices support different resolutions, and image analysis has a hard limit on the max resolution of its output (as mentioned in the documentation).

To make converting coordinates between the image analysis frames and the UI/PreviewView easier, you can set both the Preview and ImageAnalysis use cases to the same aspect ratio, for instance AspectRatio.RATIO_4_3, and give the PreviewView the same aspect ratio as well (by wrapping it inside a ConstraintLayout, for example, and setting a constraint on its width/height ratio). With this, mapping coordinates of detected faces from the analyzer to the UI becomes more straightforward; you can see it done in this sample.
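A minimal sketch of that setup, assuming the PreviewView is already constrained to a 3:4 width/height ratio in the layout (previewView, context, and analyzeFaces() are placeholder names, not part of the original answer):

// Request the same aspect ratio for both use cases so their buffers cover
// the same field of view; the actual output resolutions may still differ.
Preview preview = new Preview.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_4_3)
        .build();
preview.setSurfaceProvider(previewView.getSurfaceProvider());

ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_4_3)
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build();
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(context), imageProxy -> {
    // With matching aspect ratios, mapping a detected face point to the view
    // is a uniform scale (after accounting for the frame's rotationDegrees).
    analyzeFaces(imageProxy);
    imageProxy.close();
});

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, imageAnalysis);

Note that setTargetAspectRatio() and setTargetResolution() cannot both be set on the same builder, so drop the setTargetResolution() calls if you switch to this approach.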

Alternatively, you could use CameraX's ViewPort API, which (I believe) is still experimental. It lets you define a shared field of view for a group of use cases, so their outputs match and you get WYSIWYG behavior. You can find an example of its usage here. For your case, you'd write something like this.

Preview preview = ...
preview.setSurfaceProvider(previewView.getSurfaceProvider());

ImageAnalysis imageAnalysis = ...
imageAnalysis.setAnalyzer(...);

// The ViewPort comes from the PreviewView, so every use case in the group
// is cropped to what the PreviewView actually displays.
ViewPort viewPort = previewView.getViewPort();
UseCaseGroup useCaseGroup = new UseCaseGroup.Builder()
                .setViewPort(viewPort)
                .addUseCase(preview)
                .addUseCase(imageAnalysis)
                .build();

cameraProvider.bindToLifecycle(
                lifecycleOwner,
                cameraSelector,
                useCaseGroup);

In this scenario, every ImageProxy your analyzer receives will contain a crop rect that matches what PreviewView displays. So you just need to crop your image, then pass it to the face detector.
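For example, assuming the frame has already been converted to a Bitmap (toBitmap() and detectFaces() below are hypothetical helpers; the YUV-to-Bitmap conversion is not shown), cropping to the viewport could look like this:

imageAnalysis.setAnalyzer(executor, imageProxy -> {
    // getCropRect() is the region of this frame that corresponds to the viewport,
    // i.e. what the PreviewView actually displays.
    Rect crop = imageProxy.getCropRect();
    Bitmap frame = toBitmap(imageProxy); // hypothetical YUV_420_888 -> Bitmap helper
    Bitmap visible = Bitmap.createBitmap(
            frame, crop.left, crop.top, crop.width(), crop.height());
    // Rotate by imageProxy.getImageInfo().getRotationDegrees() if your detector
    // expects an upright image, then run face detection on the cropped frame.
    detectFaces(visible);
    imageProxy.close();
});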

