Problem description
Trying to detect blue color from an image using OpenCV, and getting an unexpected result.
I am new to OpenCV4Android. Below is the code I wrote to detect a blue colored blob in an image. Among the images below, image 1 was on my laptop. Image 2 is the frame captured by the OpenCV camera when I ran the application. You can look at the code to see what the remaining images are. (As you can see in the code, all the images are saved to the SD card.)
I have the following questions:
Why did the color of the sky-blue blob turn out to be bright yellow in the rgba frame captured by the camera (see image 2)?
I created a boundingRect around the largest blue blob and generated an ROI by doing rgbaFrame.submat(detectedBlobRoi). But as you can see in the last image, it looks like just a few gray pixels. I expected the blue blob to be separated from the rest of the image.
What am I missing or doing wrong?
Code:
private void detectColoredBlob() {
    Highgui.imwrite("/mnt/sdcard/DCIM/rgbaFrame.jpg", rgbaFrame);//check

    Mat hsvImage = new Mat();
    Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV_FULL);
    Highgui.imwrite("/mnt/sdcard/DCIM/hsvImage.jpg", hsvImage);//check

    Mat maskedImage = new Mat();
    Scalar lowerThreshold = new Scalar(170, 0, 0);
    Scalar upperThreshold = new Scalar(270, 255, 255);
    Core.inRange(hsvImage, lowerThreshold, upperThreshold, maskedImage);
    Highgui.imwrite("/mnt/sdcard/DCIM/maskedImage.jpg", maskedImage);//check

    Mat dilatedMat = new Mat();
    Imgproc.dilate(maskedImage, dilatedMat, new Mat());
    Highgui.imwrite("/mnt/sdcard/DCIM/dilatedMat.jpg", dilatedMat);//check

    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(dilatedMat, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

    //Use only the largest contour. Other contours (any other possible blobs of this color range) will be ignored.
    MatOfPoint largestContour = contours.get(0);
    double largestContourArea = Imgproc.contourArea(largestContour);
    for (int i = 1; i < contours.size(); ++i) {//NB Notice the prefix increment.
        MatOfPoint currentContour = contours.get(0);
        double currentContourArea = Imgproc.contourArea(currentContour);
        if (currentContourArea > largestContourArea) {
            largestContourArea = currentContourArea;
            largestContour = currentContour;
        }
    }

    Rect detectedBlobRoi = Imgproc.boundingRect(largestContour);
    Mat detectedBlobRgba = rgbaFrame.submat(detectedBlobRoi);
    Highgui.imwrite("/mnt/sdcard/DCIM/detectedBlobRgba.jpg", detectedBlobRgba);//check
}
- Original image as it was on my laptop screen.
- rgbaFrame.jpg
int xOffset = (openCvCameraBridge.getWidth() - cols) / 2;
int yOffset = (openCvCameraBridge.getHeight() - rows) / 2;
int x = (int) motionEvent.getX() - xOffset;
int y = (int) motionEvent.getY() - yOffset;
Log.i(TAG, "Touch image coordinates: (" + x + ", " + y + ")");//check
if ((x < 0) || (y < 0) || (x > cols) || (y > rows)) { return false; }

Rect touchedRect = new Rect();
touchedRect.x = (x > 4) ? x - 4 : 0;
touchedRect.y = (y > 4) ? y - 4 : 0;
touchedRect.width = (x + 4 < cols) ? x + 4 - touchedRect.x : cols - touchedRect.x;
touchedRect.height = (y + 4 < rows) ? y + 4 - touchedRect.y : rows - touchedRect.y;

Mat touchedRegionRgba = rgbaFrame.submat(touchedRect);
Mat touchedRegionHsv = new Mat();
Imgproc.cvtColor(touchedRegionRgba, touchedRegionHsv, Imgproc.COLOR_RGB2HSV_FULL);

double[] channelsDoubleArray = touchedRegionHsv.get(0, 0);//**********
float[] channelsFloatArrayScaled = new float[3];
for (int i = 0; i < channelsDoubleArray.length; i++) {
    if (i == 0) {
        channelsFloatArrayScaled[i] = ((float) channelsDoubleArray[i]) * 2;// TODO Wrap an ArrayIndexOutOfBoundsException wrapper
    } else if (i == 1 || i == 2) {
        channelsFloatArrayScaled[i] = ((float) channelsDoubleArray[i]) / 255;// TODO Wrap an ArrayIndexOutOfBoundsException wrapper
    }
}
int androidColor = Color.HSVToColor(channelsFloatArrayScaled);
view.setBackgroundColor(androidColor);
textView.setText("Hue : " + channelsDoubleArray[0] + "\nSaturation : " + channelsDoubleArray[1] + "\nValue : "
        + channelsDoubleArray[2]);
touchedRegionHsv.release();
return false; // don't need subsequent touch events
}
Reference solutions
Method 1:
The range you are using is probably wrong for blue. In OpenCV the hue range is 0-180, while you have given 170-270. Find the correct hue values for blue and use them in inRange.
- http://answers.opencv.org/question/30547/need-to-know-the-hsv-value/#30564
- http://answers.opencv.org/question/28899/correct-hsv-inrange-values-for-red-objects/#28901
You can refer to the answers linked above for choosing correct HSV values.
Below is code for segmenting red color; check it against your code and make sure it segments a red object.
Imgproc.cvtColor(rgbaFrame, hsv, Imgproc.COLOR_RGB2HSV, 4); // Convert to hsv for color segmentation.
Core.inRange(hsv, new Scalar(0, 50, 40, 0), new Scalar(10, 255, 255, 0), thr);// lower red range of the hue cylinder
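Core.inRange keeps a pixel (sets the mask to 255) only when every channel lies within its inclusive lower/upper bound. That per-pixel test can be sketched in plain Java; the class and method names here are illustrative, not OpenCV API:

```java
public class InRangeCheck {
    // True if each channel value lies within its inclusive lower/upper bound,
    // which is the test Core.inRange applies to every pixel.
    static boolean inRange(double[] px, double[] lo, double[] hi) {
        for (int i = 0; i < px.length; i++) {
            if (px[i] < lo[i] || px[i] > hi[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        double[] lower = {0, 50, 40};
        double[] upper = {10, 255, 255};
        // An HSV pixel with hue 5 passes the red range; hue 60 does not.
        System.out.println(inRange(new double[]{5, 120, 200}, lower, upper));  // true
        System.out.println(inRange(new double[]{60, 120, 200}, lower, upper)); // false
    }
}
```

This also shows why the question's lower bound of (170, 0, 0) matches very dark and gray pixels: saturation and value are allowed all the way down to 0.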
Method 2:
There are multiple traps when converting an image to HSV color space and working in it.
OpenCV uses a compressed hue range: originally, hue ranges from 0 to 360, which does not fit in one byte (values 0 to 255), while the saturation and value channels are exactly covered by one byte. Therefore OpenCV stores hue divided by 2, so the hue channel holds values between 0 and 180. Accordingly, your hue range from 170 to 270 divided by 2 gives a range of 85 to 135 in OpenCV.
Hue tells you the color tone, but saturation and value are still important for reducing noise, so set a minimum threshold for saturation and value, too.
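As a sanity check, the two hue scalings can be sketched in plain Java (the class and method names are illustrative). COLOR_RGB2HSV compresses hue to 0-180 by halving, while the COLOR_RGB2HSV_FULL flag used in the question's code rescales hue to the full 0-255 byte range instead:

```java
public class HueScale {
    // COLOR_RGB2HSV: hue in degrees is stored divided by 2 (range 0-180).
    static int toOpenCvHue(int hueDegrees) {
        return hueDegrees / 2;
    }

    // COLOR_RGB2HSV_FULL: hue is rescaled to cover the full byte (range 0-255).
    static int toOpenCvHueFull(int hueDegrees) {
        return hueDegrees * 255 / 360;
    }

    public static void main(String[] args) {
        // The question's 170-270 degree range expressed in each scale:
        System.out.println(toOpenCvHue(170) + "-" + toOpenCvHue(270));         // 85-135
        System.out.println(toOpenCvHueFull(170) + "-" + toOpenCvHueFull(270)); // 120-191
    }
}
```

Either way, thresholds of 170-270 taken literally on the stored hue channel do not correspond to blue.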
Very important: OpenCV uses BGR memory ordering for rendering and image saving. If your image uses RGB(A) ordering and you save it without color conversion, you swap the R and B channels, so what should be red becomes blue, and so on. Unfortunately you normally can't tell from the image data itself whether it is RGB- or BGR-ordered, so you have to find out from the image source. OpenCV provides flags to convert from RGB(A) to HSV, from BGR(A) to HSV, from RGB to BGR, and so on, so this is no problem as long as you know which memory format your image uses. Displaying and saving, however, always assume BGR ordering, so if you want to display or save the image, convert it to BGR first. The HSV values will be the same whether you convert a BGR image with BGR2HSV or an RGB image with RGB2HSV, but they will be wrong if you convert a BGR image with RGB2HSV or an RGB image with BGR2HSV. I'm not 100% sure about the Java/Python/Android APIs of OpenCV, but your image really looks like the B and R channels are swapped or misinterpreted (though since you use RGBA2HSV conversion, that is no problem for the HSV colors).
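The channel-ordering trap can be shown without OpenCV at all: the stored bytes never change, only their interpretation does. A small plain-Java sketch (names are illustrative):

```java
public class ChannelSwap {
    // Reads a 3-channel pixel assuming RGB order.
    static String readAsRgb(int[] px) {
        return "R=" + px[0] + " G=" + px[1] + " B=" + px[2];
    }

    // Reads the same bytes assuming BGR order: red and blue trade places.
    static String readAsBgr(int[] px) {
        return "R=" + px[2] + " G=" + px[1] + " B=" + px[0];
    }

    public static void main(String[] args) {
        int[] bluePixelRgb = {0, 0, 255}; // a saturated blue pixel stored in RGB order
        System.out.println(readAsRgb(bluePixelRgb)); // correct interpretation: pure blue
        System.out.println(readAsBgr(bluePixelRgb)); // misinterpretation: pure red
    }
}
```

This is why a frame saved with the wrong assumed ordering shows blue objects in a warm color, exactly the symptom described in the question.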
About your contour extraction: there is a tiny (copy-paste?) bug in your code of a kind everyone trips over once in a while:
MatOfPoint largestContour = contours.get(0);
double largestContourArea = Imgproc.contourArea(largestContour);
for (int i = 1; i < contours.size(); ++i) {//NB Notice the prefix increment.
    // HERE you had MatOfPoint currentContour = contours.get(0); so you tested the first contour in each iteration
    MatOfPoint currentContour = contours.get(i);
    double currentContourArea = Imgproc.contourArea(currentContour);
    if (currentContourArea > largestContourArea) {
        largestContourArea = currentContourArea;
        largestContour = currentContour;
    }
}
So probably just this line has to be changed to use i instead of 0 in the loop: MatOfPoint currentContour = contours.get(i);
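Stripped of the OpenCV types, the corrected selection is just an argmax over the contour areas. A self-contained plain-Java sketch, where the areas array stands in for Imgproc.contourArea results (names are illustrative):

```java
public class LargestBlob {
    // Returns the index of the largest area. Mirrors the corrected loop,
    // which must read element i, not element 0, on each iteration.
    static int indexOfLargest(double[] areas) {
        int largest = 0;
        for (int i = 1; i < areas.length; ++i) {
            if (areas[i] > areas[largest]) {
                largest = i;
            }
        }
        return largest;
    }

    public static void main(String[] args) {
        double[] areas = {12.5, 340.0, 88.0};
        System.out.println(indexOfLargest(areas)); // prints 1
    }
}
```

With the original bug (always reading index 0), the comparison could never find a larger contour, so the first contour was always treated as the largest.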