Channel: ROS Answers: Open Source Q&A Forum - RSS feed
Viewing all 57 articles

How can I speed up image rectification?

I hope to increase the rectified frame rate that I'm getting from image_proc. I'm feeding it a 64 fps 704x456 video stream, where that 704x456 rectangle is the region of interest sampled from a 1600x1200 sensor. The camera calibration is specified for the full-sensor size, and I'm using the Diamondback version of image_proc.

    $ rostopic hz /camera/image_raw
    average rate: 64.023
    $ rostopic hz /camera/image_color
    average rate: 64.113
    $ rostopic hz /camera/image_rect_color
    average rate: 27.558
    $ top
    767 user 20 0 587m 109m 11m S *115* 0.9 132:37.80 image_proc

Using earlier versions of image_proc and camera1394, I was able to get 38 fps rectified by choosing an MxN subrectangle [slightly larger than the current ROI] and calibrating with that. In that case, the Image and CameraInfo messages would both specify MxN pixels. Now, though, camera_info lists the full 1624x1224, and I suspected that something inside the rectify nodelet was copying my ROI onto a full-size image before undistorting anything.

    $ rostopic echo /camera/image_raw
    height: 456
    width: 704
    encoding: bayer_rggb8
    is_bigendian: 0
    step: 704

However, when I remove the ROI, the undistorted frame rate drops further:

    $ rostopic hz /camera/image_color
    average rate: 29.145
    $ rostopic hz /camera/image_rect_color
    average rate: 19.497

so I'm unsure of the cause of the slowdown. Can anyone see what's going on?
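If the rectify step really is pasting the ROI onto a full-sensor canvas before remapping, the extra work is easy to quantify from the numbers above. A back-of-the-envelope sketch (plain arithmetic, no ROS; the 1624x1224 figure is the full Format7 size reported in camera_info):

```python
# Pixels the rectify step touches per frame, ROI vs. full sensor.
roi_pixels = 704 * 456        # region of interest actually captured
full_pixels = 1624 * 1224     # full sensor size listed in camera_info

ratio = full_pixels / roi_pixels
print(f"full-frame remap touches {ratio:.2f}x more pixels than the ROI")

# Per-frame time budget implied by the observed 27.5 fps rectified rate:
budget_ms = 1000 / 27.5
print(f"observed per-frame rectification budget: {budget_ms:.1f} ms")
```

If the roughly 6x pixel ratio lines up with the observed throughput drop, that would support the copy-onto-full-frame theory.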

prosilica camera configuration

I wonder if anybody knows a way/tool/wizard to find an optimal configuration for the Prosilica GC1380 camera. I use the AVT GigE SDK and play with the Exposure and Gain variables to get an acceptable image at 640x480 resolution, but the resulting frame rate is too low (less than 5 fps). A high frame rate comes at the cost of a dark image. Any suggestions for getting an image with quality and stream speed good enough for face_detector? Thanks.

Is it better/preferred to combine nodelets programmatically or in a launch file?

Is there an advantage to combining nodelets programmatically? I noticed that stereo_image_proc does this and was surprised. I'm attempting to chain image processing using nodelets to avoid the serialization overhead, and want to combine with stereo_image_proc, but was confused as to whether there was any reason I shouldn't just load up all of the constituent nodelets in a launch file.
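For reference, the launch-file alternative the question is weighing looks roughly like this: one shared manager, with each processing stage loaded into it. This is a minimal sketch; the manager and node names are illustrative, not taken from stereo_image_proc.

```xml
<launch>
  <!-- One shared manager: nodelets loaded into it pass image pointers
       in-process, avoiding serialization between stages. -->
  <node pkg="nodelet" type="nodelet" name="my_manager" args="manager"
        output="screen"/>

  <!-- Stage 1: debayer the raw camera stream. -->
  <node pkg="nodelet" type="nodelet" name="debayer"
        args="load image_proc/debayer my_manager"/>

  <!-- Stage 2: rectify, chained onto the debayer output. -->
  <node pkg="nodelet" type="nodelet" name="rectify"
        args="load image_proc/rectify my_manager"/>
</launch>
```

Loading programmatically (as stereo_image_proc does) lets the code compute remappings at runtime; a launch file like the above works whenever the topic wiring is known in advance.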

How can additional nodelets be attached to the stereo_image_proc's nodelet manager in Diamondback?

I'd like to attach additional image processing to the `points2` and `disparity` outputs of `stereo_image_proc`, but I can't figure out how to get the `stereo_image_proc` nodelets to attach to my nodelet manager, or how to attach mine to theirs. Attempting to load nodelets using `stereo_image_proc` fails. The configuration done inside `stereo_image_proc` is complex and done programmatically as a workaround for a topic remapping problem, so I was unable to rebuild `stereo_image_proc`'s elements:

* `image_proc/rectify`
* `stereo_image_proc/disparity`
* `stereo_image_proc/point_cloud2`

... into the same functionality using simply a launch file with a nodelet manager I could attach to.

camera1394 from launch

When I attempt to roslaunch a camera1394 node with a camera.yaml file, I get the following error:

    [ERROR] [1308174837.695264371]: Exception parsing YAML camera calibration: yaml-cpp: error at line 1, column 1: key not found: image_width
    [ERROR] [1308174837.695298296]: Failed to parse camera calibration from file [/home/bradpowers/ros/ros_tutorials/cv_bench/params/ffmv.yaml]

This indicates that the camera is uncalibrated, which it is. Unfortunately this means that I can't get color images working, despite having determined the bayer pattern and the like. At the moment I don't want to have to calibrate the camera, but I would like the color images out of the camera1394 node. Is there a way to run the camera1394 node without the calibration portion? Here is my launch file: And my corresponding ffmv.yaml file:

    {auto_brightness: 2, auto_exposure: 2, auto_focus: 5, auto_gain: 2,
     auto_gamma: 0, auto_hue: 5, auto_iris: 5, auto_saturation: 5,
     auto_sharpness: 5, auto_shutter: 2, auto_white_balance: 3, auto_zoom: 5,
     bayer_method: '', bayer_pattern: rggb, binning_x: 0, binning_y: 0,
     brightness: 138.0, camera_info_url: '', exposure: 0.0, focus: 0.0,
     format7_color_coding: mono8, format7_packet_size: 0, frame_id: /camera,
     frame_rate: 60.0, gain: 12.041193008422852, gamma: 1.2000000000000002,
     guid: 00b09d0100a80b41, hue: 0.0, iris: 8.0, iso_speed: 400,
     reset_on_open: false, roi_height: 0, roi_width: 0, saturation: 1.0,
     sharpness: 1.0, shutter: 0.066243231296539307, use_ros_time: true,
     video_mode: 640x480_mono8, white_balance_BU: 512.0,
     white_balance_RV: 512.0, x_offset: 0, y_offset: 0, zoom: 0.0}

Thanks!
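A note on the error itself: it appears the yaml-cpp parser is being pointed at the driver parameter file above, while it expects a camera_info_manager calibration file, whose first key is `image_width` — hence "key not found: image_width". The expected shape of a calibration file looks like this (values here are illustrative only, not a real calibration):

```yaml
# Minimal shape of a camera_info_manager calibration file.
# This is what the parser is looking for; the driver parameter file
# above contains none of these keys.
image_width: 640
image_height: 480
camera_name: ffmv
camera_matrix:
  rows: 3
  cols: 3
  data: [500, 0, 320, 0, 500, 240, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0, 0, 0, 0, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [500, 0, 320, 0, 0, 500, 240, 0, 0, 0, 1, 0]
```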

image_proc warning /camera/camera/...

Hi all, I am receiving warnings from image_proc every 30 seconds or so:

    [ WARN] [1310053249.262781753]: The input topic '/camera/camera/image_raw' is not yet advertised
    [ WARN] [1310053249.262963275]: The input topic '/camera/camera/camera_info' is not yet advertised

Note the doubling of 'camera'. The prosilica_camera node is publishing /camera/image_raw and /camera/camera_info, and image_proc is subscribed to them with no problem, other than the persistent warnings. I am launching the nodes with: It could be me, but I think this behavior began only recently, perhaps due to an update. Any ideas?
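For what it's worth, one common way a doubled namespace like `/camera/camera/...` arises in general is a node named `camera` being launched inside a `camera` group, so relative topic names get resolved twice. A hypothetical illustration (this is not the poster's actual launch file, which was not included in the question):

```xml
<launch>
  <group ns="camera">
    <!-- A node named "camera" inside namespace "camera": topics published
         relative to the node name resolve to /camera/camera/... -->
    <node pkg="prosilica_camera" type="prosilica_node" name="camera"/>
    <node pkg="image_proc" type="image_proc" name="image_proc"/>
  </group>
</launch>
```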

proliferation of dynamic_reconfigure servers due to image_proc or image_transport

Hi- I've noticed recently that it has become increasingly difficult to make use of the dynamic reconfigure gui because there are too many servers in the dropdown list. It seems now every image transport is adding two servers to the list, one for "compressed" and one for "theora". When the PR2 is running there is more than a whole screen's worth of dynamic reconfigure servers just for the image processing stuff. Trying to find or quickly switch between servers in the drop down menu is really annoying... I've resorted to making my servers of interest start with 'a_' so that I can find them first, but this seems hacky/unsustainable. Is there any way to set a flag that tells some of those image transport nodes that I don't want them to advertise their servers? Or to tell the gui to ignore a set of nodes? (Maybe the servers could advertise a priority, like the different ROS_INFO/ROS_WARN levels, or they could belong to groups that are easier to manage.) Ideas?

image_proc/crop_decimate initialization problems

Hi all, I am having a lot of problems with the crop_decimate nodelet. I am using an auto-starting launch script which launches the camera1394 -> debayer -> crop_decimate nodelets. It seems that 80% of the time the crop_decimate nodelet will not launch properly; however, if I launch it after a delay, the problem disappears. By not launching properly I mean that the crop_decimate nodelet exists, but it never publishes any images. Has anybody else seen this problem? My current hacky solution of delayed launching of the crop_decimate nodelet is not very elegant, and I would rather not do it.
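The fixed-delay workaround can at least be replaced by polling: wait until the upstream topic actually has a publisher before loading the dependent nodelet. The sketch below shows only the shape of that logic; `has_publisher` and `load_nodelet` are hypothetical stand-ins for the corresponding rosnode/nodelet calls, not real APIs.

```python
import time

def wait_then_load(has_publisher, load_nodelet, timeout_s=10.0, poll_s=0.2):
    """Poll until the upstream topic is advertised, then load the nodelet.

    Returns True if the nodelet was loaded before the timeout, False otherwise.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if has_publisher():          # stand-in for a rostopic/rosnode check
            load_nodelet()           # stand-in for the nodelet load call
            return True
        time.sleep(poll_s)
    return False
```

Unlike a fixed sleep, this starts the nodelet as soon as its input exists and fails loudly (returns False) if it never does.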

rectify image in your package

Dear All, I have a package with two parameters for the image and camera_info topics:

    nh_.param("input_image_topic", input_image_topic_, std::string("/camera"));
    nh_.param("input_camera_info_topic", input_camera_info_topic_, std::string("/camera_info"));

I am wondering if there is a way to rectify images inside my package. I do not want to use image_proc in a launch file; I need the rectified images available in my package to do some processing. cheers, Eddi
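The usual tool for this is the image_geometry package: construct a `PinholeCameraModel`, feed it the CameraInfo, and call its rectify method on each frame. To show what that computation amounts to, here is the plumb_bob distortion model in miniature, in plain Python with illustrative coefficients (no ROS dependencies; a real node would use image_geometry rather than reimplementing this):

```python
def distort_normalized(x, y, k1, k2, p1, p2, k3=0.0):
    """Map an undistorted normalized image point to its distorted position.

    This is the plumb_bob model: rectification builds the inverse of this
    mapping to decide where each rectified pixel samples the raw image.
    """
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With zero distortion coefficients the mapping is the identity:
assert distort_normalized(0.3, -0.2, 0, 0, 0, 0) == (0.3, -0.2)
```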

Multiple Point Grey Chameleon USB cameras unstable

Hello. This will be a rather fuzzy question and I apologize for that in advance.

We have two Point Grey Chameleon (http://ptgrey.com/products/chameleon/chameleon_usb_camera.asp) cameras connected to our computer, and we're using the camera1394 driver to publish them in ROS. We've been having instability problems with our setup which seem to be very difficult to debug and analyze. We know (now) that USB 2.0 cameras aren't exactly the best option for robot vision, and running two like the Chameleon (with a resolution of 1296x964) might be stretching the limit a little. However, the annoying thing is that we've been able to run the cameras without problems. It works often, but not nearly always. Sometimes we have "rough patches" where things behave unusually badly (tonight was one such night). But as I said, once the cameras are up, they seem to stay up as long as we don't kill the image_proc process. If, however, the image_proc process reports an error (segfaults), we seem to need to hard-reboot the computer and cut off power to the cameras (they are powered through both USB and GPIO, since USB power alone didn't seem to cope with two of these).

So my question is basically whether anybody has experience or tips with running multiple (higher-end) USB cameras through the camera1394 driver? Maybe even the Point Grey Chameleon cameras? What can we do to make this less error-prone? I just found some code that seems to offer a driver for image_proc directly on top of the PGR SDK (http://www.roschina.net/ros/www.ros.org/wiki/pgr_camera_driver.html); I'll try that out tomorrow, but I'd like to hear if anybody has experience with it. Kind regards, Stefan Freyr

**UPDATE:** A bit more information about this. I just updated the firmware on both cameras. After that I did a little more structured testing, shutting down and cutting all power to both the computer and cameras. After booting up I tried launching both cameras, and that worked. I killed that launch and tried again and got the segfault. Then I tried running each camera individually, and that works fine. Then I tried running flycap (the Point Grey image viewer and configuration tool): it works for each camera individually and (here's the kicker) it also works to fire up two flycap instances and view both cameras simultaneously! After that I tried running both cameras from a ROS launch script, but I get the segfault. I'm trying to get more information about the segfault, but I can't find anything in the log files. Is there a way to make the log files more verbose for camera1394? Do I need to compile the camera1394 driver again, and if so, how do I do that cleanly on a setup using the Ubuntu packages? This seems to be some sort of a pickle with camera1394, since both cameras work in flycap (both individually and simultaneously), so I am considering testing the pgr_camera driver that I mentioned before, but I really would like to figure out what's going on with camera1394 if there is any way to do that and get it fixed.

Force raw8 for debayering

Hi, I have a Basler camera that outputs Bayer-pattern images in format7_mode1 mode. However, when I set raw8 for the format7_color_coding parameter, it fails to be set, and mono8 is used instead. Therefore, the image_proc node cannot debayer the images, because it thinks they are simply grayscale instead of Bayer, as they actually are. Everything works in coriander (see [coriander_bayer.png](/upfiles/13413943973522718.png)). The output of the camera1394 node is

    [ INFO] [1341392664.941249990]: using default calibration URL
    [ INFO] [1341392664.941737294]: camera calibration URL: file:///home/enrique/.ros/camera_info/camera.yaml
    [ERROR] [1341392664.942107239]: Unable to open camera calibration file [/home/enrique/.ros/camera_info/camera.yaml]
    [ WARN] [1341392664.942363881]: Camera calibration file /home/enrique/.ros/camera_info/camera.yaml not found.
    [ INFO] [1341392665.615791969]: Found camera with GUID 305300013759c6
    [ INFO] [1341392665.616241698]: camera model: Basler A102fc
    [ INFO] [1341392665.621517490]: Format7 unit size: (2x2), position: (2x2)
    [ INFO] [1341392665.621664656]: Format7 region size: (1388x1038), offset: (0, 0)
    [ERROR] [1341392665.624025892]: Color coding raw8 not supported by this camera
    [ INFO] [1341392665.636889941]: using default calibration URL
    [ INFO] [1341392665.637584961]: camera calibration URL: file:///home/enrique/.ros/camera_info/00305300013759c6.yaml
    [ERROR] [1341392665.638324312]: Unable to open camera calibration file [/home/enrique/.ros/camera_info/00305300013759c6.yaml]
    [ WARN] [1341392665.639039116]: Camera calibration file /home/enrique/.ros/camera_info/00305300013759c6.yaml not found.
    [ INFO] [1341392665.639805838]: [00305300013759c6] opened: format7_mode1, 15 fps, 400 Mb/s
    [ INFO] [1341392665.640524806]: camera calibration URL: package://cameras_cirs/calibration/basler_a102fc/format7_mode1/00305300013759c6.yaml

The message "Color coding raw8 not supported by this camera" in the output is generated in the modes.cpp file. IMHO the best way to solve this would be to allow the image_proc node to debayer mono8 images, so that cameras which call "mono8" what should really be "raw8" are supported. How can I force the raw8 encoding, or just debayer the images? Thanks in advance. Below you can see the launch and parameter/config YAML files I'm using to run the camera driver. Enrique

**basler_a102fc_format7_mode1.yaml**

    # Example camera parameters (for Basler A102fc)
    # Parameters (no lens required/used)
    guid: 00305300013759c6   # (defaults to first camera on bus)
    iso_speed: 400           # IEEE1394a
    video_mode: format7_mode1  # 1388x1038 @ 15fps bayer pattern
    frame_rate: 15           # max fps (Hz)
    format7_color_coding: raw8  # for bayer (we can use others, even rgb8 directly)
    # With raw8, we don't need to configure bayer pattern and method; actually we
    # cannot change them
    bayer_pattern: gbrg
    #bayer_method: HQ
    auto_brightness: 3       # Manual (3)
    brightness: 0
    # It is better to increase gain than shutter speed, because gain produces
    # salt&pepper noise, while shutter speed produces motion blur, which is
    # harder to deal with.
    auto_gain: 3             # Manual (3)
    gain: 512                # dB unavailable
    # See comment above wrt gain and shutter speed setting.
    auto_shutter: 3          # Manual (3)
    shutter: 356             # time (ms) unavailable; value for sunlight
    auto_white_balance: 3    # Manual (3)
    white_balance_BU: 112
    white_balance_RV: 64
    frame_id: basler_a102fc
    camera_info_url: package://cameras_cirs/calibration/basler_a102fc/format7_mode1/${NAME}.yaml

**basler_a102fc.launch**
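If the driver cannot be convinced to report raw8, a workaround on the ROS side is to republish the frames with the encoding field relabeled: the bytes of a mono8 frame from this camera are already the Bayer mosaic, only the tag is wrong. A minimal sketch (the `Image` class here is a stand-in for sensor_msgs/Image, not the real message type):

```python
class Image:
    """Minimal stand-in for sensor_msgs/Image (encoding + data only)."""
    def __init__(self, encoding, data):
        self.encoding = encoding
        self.data = data

def relabel_as_bayer(msg, pattern="gbrg"):
    """Return a copy of a mono8 image re-tagged as a Bayer image.

    The pixel data is untouched; only the declared encoding changes,
    so image_proc's debayer step will treat it as a Bayer mosaic.
    """
    assert msg.encoding == "mono8", "only mono8 frames need relabeling"
    return Image(f"bayer_{pattern}8", msg.data)

out = relabel_as_bayer(Image("mono8", b"\x10\x20\x30\x40"))
assert out.encoding == "bayer_gbrg8" and out.data == b"\x10\x20\x30\x40"
```

In a real node this would be a subscriber/publisher pair copying each incoming message and rewriting its encoding string.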

Replacing camera_info messages

Is there any easy way to replace the camera_info messages stored in a bag file? In particular, when using either the image_proc or stereo_image_proc nodes.
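One common approach is to stream the bag, swapping every message on the camera_info topic for the new calibration while copying everything else through. Here is a sketch of that filter written over plain `(topic, msg, t)` tuples so it is self-contained; with the rosbag Python API the tuples would come from `Bag.read_messages()` and go to `Bag.write()`:

```python
def replace_camera_info(messages, info_topic, new_info):
    """Yield (topic, msg, t) tuples, substituting new_info on info_topic.

    Timestamps are preserved so the replaced CameraInfo still pairs up
    with its Image messages downstream.
    """
    for topic, msg, t in messages:
        if topic == info_topic:
            yield topic, new_info, t
        else:
            yield topic, msg, t

recorded = [("/camera/image_raw", "img0", 0.0),
            ("/camera/camera_info", "old_info", 0.0)]
fixed = list(replace_camera_info(recorded, "/camera/camera_info", "new_info"))
assert fixed[1] == ("/camera/camera_info", "new_info", 0.0)
```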

How to make image_proc from source? Missing opencv2 dependency

Hi, I am trying to install image_proc (the non-catkin one) from source. However, its dependency opencv2 (the Electric version) is deprecated. Does image_proc still need the opencv2 dependency? Can I just delete opencv2 from image_proc's manifest.xml? This is the output of rosdep install image_proc:

    root@linaro-alip:~/catkin_ws/src/image_pipeline-electric/image_proc# rosdep install image_proc
    ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies:
    image_proc: Missing resource smclib
    ROS path [0]=/root/ros_core_ws/install/share/ros
    ROS path [1]=/root/catkin_ws/src
    ROS path [2]=/root/ros_core_ws/install/share
    ROS path [3]=/root/ros_core_ws/install/stacks

Thanks in advance!!!

load calibration file

Hi there, I would like to calibrate a webcam extrinsically against my Kinect. For that I need the rectified image of the webcam (usb_cam/image_rect). To get it, I calibrated the camera following this [tutorial](http://ros.org/wiki/camera_calibration) and got an ost.txt. I converted this txt file to YAML and saved it. I use the following commands to get the rectified image:

    $ roslaunch usb_cam high.launch

In another tab:

    $ ROS_NAMESPACE=usb_cam rosrun image_proc image_proc

And then to receive the rectified image:

    $ rosrun image_view image_view image:=/usb_cam/image_rect

But when I subscribe to the rectified image I just get

    [ERROR] [1356567364.021643621]: Rectified topic '/usb_cam/image_rect' requested but camera publishing '/usb_cam/camera_info' is uncalibrated

My launch file looks like this: #max: 2304 #max: 1536

My calibration file like this:

    image_width: 1920
    image_height: 1080
    camera_name: usb_cam
    camera_matrix:
      rows: 3
      cols: 3
      data: [1367.163672, 0, 960.12046, 0, 1387.606605, 519.231255, 0, 0, 1]
    distortion_model: plumb_bob
    distortion_coefficients:
      rows: 1
      cols: 5
      data: [0.094896, -0.140698, -0.000646, 0.002101, 0]
    rectification_matrix:
      rows: 3
      cols: 3
      data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
    projection_matrix:
      rows: 3
      cols: 4
      data: [1384.273193, 0, 963.928625, 0, 0, 1410.329346, 518.011368, 0, 0, 0, 1, 0]

EDIT 1: I saw that the usb_cam package doesn't use the camera_info_manager. Could that be the reason why the calibration file has no effect? I have now tried inserting the calibration parameters directly into the launch file, but still without a change.

This is the output of `$ rostopic echo /usb_cam/camera_info`:

    header:
      seq: 204
      stamp:
        secs: 1356617114
        nsecs: 959397246
      frame_id: usb_cam
    height: 240
    width: 320
    distortion_model: ''
    D: []
    K: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    R: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    P: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    binning_x: 0
    binning_y: 0
    roi:
      x_offset: 0
      y_offset: 0
      height: 0
      width: 0
      do_rectify: False
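The error is consistent with the camera_info output above: the published K matrix is all zeros, so the calibration in the YAML file never reached the driver. A check equivalent in spirit to the one the rectify step performs, using values copied from the question:

```python
def is_calibrated(K):
    """A camera_info is usable for rectification only if K is nonzero."""
    return any(v != 0.0 for v in K)

published_K = [0.0] * 9   # what /usb_cam/camera_info actually carries
yaml_K = [1367.163672, 0, 960.12046, 0, 1387.606605, 519.231255, 0, 0, 1]

assert not is_calibrated(published_K)   # hence the ERROR from image_proc
assert is_calibrated(yaml_K)            # the calibration file itself is fine
```

So the file is valid; the problem is getting the driver to publish its contents.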

Synchronization problem related to calibration

Hello all. I am a rookie in ROS; I am using ROS Fuerte and a monocular camera (my laptop's camera). I want to know what I need to set up to produce a rectified image with the image_proc package. I already set up a launch file (attached). But when I ran image_view to display the rectified image, I got nothing, while running image_view on the mono image works. So I want to know what I have to set up to rectify the image, and what I did wrong. I attached the rxgraph output, my launch file, and my camera parameters. Besides, I have a warning:

    [image_transport] Topics '/Dava/image_mono' and '/Dava/gscam/camera_info' do not appear to be synchronized. In the last 10s:
    Image messages received: 80
    CameraInfo messages received: 80
    Synchronized pairs: 0

I will appreciate any response. Thank you. ![image description](/upfiles/1380412287617070.png) ![image description](/upfiles/13804139829509174.png) [Output.png](/upfiles/13806688904420701.png) [Test_Launch.png](/upfiles/13804128631802391.png) You can download the launch file at this link: https://mega.co.nz/#!fohxhQYY!RIuUOVuf1mL8Nn_erxkrujVt8zUmsLn-4_kP2YokMxc
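One thing worth knowing about that warning: image_proc pairs Image and CameraInfo messages by exact header timestamp, so two streams at the same rate can still produce zero synchronized pairs if the driver stamps them at slightly different times. A miniature of that pairing logic (the timestamps below are illustrative):

```python
def exact_pairs(image_stamps, info_stamps):
    """Count Image/CameraInfo messages whose header stamps match exactly."""
    return len(set(image_stamps) & set(info_stamps))

# Same rate, same message count, but stamps taken microseconds apart:
images = [0.000010, 0.033343, 0.066676]
infos  = [0.000000, 0.033333, 0.066666]
assert exact_pairs(images, infos) == 0    # "Synchronized pairs: 0"
assert exact_pairs(images, images) == 3   # identical stamps pair up
```

The fix is usually in the driver: copy the Image header stamp into the CameraInfo message before publishing.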

stereo_image_proc disparity image crop

Hi, I am using stereo_image_proc to generate a disparity image. However, I always see a fixed number of columns cropped off on the left side. Since this number is fixed, the cropped region is almost half the disparity image at 160x120, i.e. almost 80 columns are blank. I didn't notice this problem until I went below 640x480 resolution. For an 80x60 image it becomes even worse: only a few columns of pixels are visible. Has anyone experienced the same problem? Is there any way to rectify this? I tried the same with elas and there is no cropping.
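For context on why the blank band scales so badly: block-matching stereo cannot compute disparity in roughly the leftmost `num_disparities` columns (plus half the correlation window), and that count is fixed in pixels rather than as a fraction of the width. The parameter values below are illustrative, not read from the poster's configuration:

```python
def invalid_fraction(width, num_disparities=64, window=15):
    """Fraction of image columns where block matching yields no disparity."""
    invalid_cols = min(width, num_disparities + window // 2)
    return invalid_cols / width

for width in (640, 160, 80):
    print(f"{width:4d} px wide -> {invalid_fraction(width):.0%} blank on the left")
```

Shrinking `disparity_range` (at the cost of minimum measurable depth) shrinks the band; ELAS does not use a fixed search band, which would explain why it shows no cropping.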

Image_proc package and yuv422 format

I need an example of running image_proc to get a topic that gives me the image in YUV422 format. The goal is to avoid converting the image in every topic subscriber. Thanks in advance.
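For scale, here is what the conversion being avoided costs per pixel: yuv422 packs two pixels into four bytes, and each pixel needs a small matrix multiply to become RGB. A minimal sketch using BT.601 full-range coefficients and UYVY byte order (both of which are assumptions; other orderings exist):

```python
def uyvy_to_rgb(u, y0, v, y1):
    """Convert one UYVY macropixel (two pixels sharing U and V) to RGB."""
    def conv(y, u, v):
        r = y + 1.402 * (v - 128)
        g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
        b = y + 1.772 * (u - 128)
        clamp = lambda c: max(0, min(255, int(round(c))))
        return clamp(r), clamp(g), clamp(b)
    return conv(y0, u, v), conv(y1, u, v)

# A neutral gray macropixel stays gray:
assert uyvy_to_rgb(128, 128, 128, 128) == ((128, 128, 128), (128, 128, 128))
```

Doing this once upstream and publishing the result, rather than in every subscriber, is exactly the trade-off the question is about.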

Why does image_proc only show the upper left half of my image?

Hello,
I am currently publishing an image which is 1024x1024 pixels and I am trying to rectify it using image_proc. The output of my /camera/camera_info is:
    header:
      seq: 3692
      stamp:
        secs: 1405337388
        nsecs: 154563950
      frame_id: camera
    height: 1024
    width: 1024
    distortion_model: plumb_bob
    D: [-0.183149270183746, 0.139143255380374, 0.00225718725189143, 0.00131346587970189, 0.0]
    K: [1189.58587069262, 0.0, 501.625685647322, 0.0, 1188.97100074168, 550.474655398214, 0.0, 0.0, 1.0]
    R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
    P: [1151.62145996094, 0.0, 501.542604357335, 0.0, 0.0, 1150.24182128906, 553.703494449055, 0.0, 0.0, 0.0, 1.0, 0.0]
    binning_x: 2
    binning_y: 2
    roi:
      x_offset: 0
      y_offset: 0
      height: 0
      width: 0
      do_rectify: True
While the header of the output of /camera/image_raw is:

    header:
      seq: 7082
      stamp:
        secs: 1405337872
        nsecs: 724561256
      frame_id: camera
    height: 1024
    width: 1024
    encoding: mono8
    is_bigendian: 0
    step: 1024

So the problem is that when I run `$ ROS_NAMESPACE=camera rosrun image_proc image_proc`, I get images published at /camera/image_rect that have the following header:
    header:
      seq: 20
      stamp:
        secs: 1405338132
        nsecs: 447977005
      frame_id: camera
    height: 512
    width: 512
    encoding: mono8
    is_bigendian: 0
    step: 512

I can see with image_view that the rectified image is only the upper-left 512x512 of the original 1024x1024 image_raw. Does anyone know why image_proc is grabbing only the upper-left 512x512 pixels of my image? It seems to me to be some kind of ROI problem, but I cannot see where... Thanks in advance, Pedro
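One observation about the camera_info above, offered as a hypothesis rather than a confirmed diagnosis: it advertises `binning_x: 2, binning_y: 2`, which tells downstream consumers the published image is a 2x2-binned view of the sensor, and the rectified output size is exactly the raw size divided by that binning:

```python
# Consistency check on the sizes reported in the question.
raw_w, raw_h = 1024, 1024            # /camera/image_raw
binning_x = binning_y = 2            # from /camera/camera_info
rect_w, rect_h = raw_w // binning_x, raw_h // binning_y
assert (rect_w, rect_h) == (512, 512)   # the observed /camera/image_rect size
```

If the published image is not actually binned, setting binning_x and binning_y to 0 (or 1) in the CameraInfo may be worth trying before digging further into ROI handling.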

stereo image proc bypass

Ubuntu 14.04, ROS Indigo Igloo, Bumblebee2. Hello! I am using a lookup table, generated outside ROS with Triclops, to rectify the images extracted with [camera1394stereo](http://wiki.ros.org/camera1394stereo). Now I have "perfect" rectification, but I want to bypass stereo_image_proc to avoid its rectification, normalization, and rotation steps. I just want to generate the disparity image and the PointCloud2 message to use in [viso2_ros](http://wiki.ros.org/viso2_ros). The problem is that I don't have the correct parameters to fill the camera_info msg for the left and right cameras. As far as I understand, viso2_ros uses image_proc to do all the processing to generate rectified images. Is there some way to use image_proc, stereo_image_proc or depth_image_proc to get the disparity image and the point cloud without doing the rectification? Thanks in advance.
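For reference, the geometry stereo_image_proc applies after rectification is small: with rectified images, focal length f (pixels), baseline B (meters), and disparity d (pixels), each pixel back-projects to a 3D point. The values below are illustrative, not Bumblebee2 calibration:

```python
def disparity_to_point(u, v, d, fx, fy, cx, cy, baseline):
    """Back-project pixel (u, v) with disparity d to (X, Y, Z) in meters."""
    Z = fx * baseline / d          # depth from the standard stereo relation
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z

X, Y, Z = disparity_to_point(320, 240, 16.0, fx=640.0, fy=640.0,
                             cx=320.0, cy=240.0, baseline=0.12)
assert abs(Z - 4.8) < 1e-9 and X == 0.0 and Y == 0.0
```

So if the externally rectified images come with a consistent fx, fy, cx, cy, and baseline (encoded in the left/right P matrices of camera_info), the disparity and point-cloud stages can run on them directly.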

Implementing RegionOfInterest and do_rectify in camera driver for use with image_proc?

I'd like to implement support for RegionOfInterest in my camera driver, but I'm not sure how to do it correctly. I'd like image_proc to be able to subscribe to the ROI image and operate correctly, though it's not clear what correct operation would look like. There is some helpful description in the 'Raw and Rectified ROI' section of http://www.ros.org/reps/rep-0104.html, but it isn't always clear where responsibility lies for implementing a feature, and what numbers should be published in CameraInfo for use by image_proc.

If do_rectify is true, does that mean the associated ROI is in rectified coordinates? Should I calculate a much bigger raw ROI to send to the camera, using the bounding box of the distorted desired ROI boundary, to make sure there are no black pixels after the later undistortion, but then publish the original desired ROI on the camera_info topic for use by image_proc, which would cause image_proc to select a window within the undistorted image that has no border? But then how would image_proc know where to place the distorted image in the distorted coordinate frame? It seems like there isn't enough information present (unless image_proc is running its own distort/undistort to get a mapping). Right now I'm trying to set the camera_info ROI to be the distorted ROI sent to the camera, but for some reason I have to divide the camera_info ROI x and y offsets by 2, otherwise the image gets increasingly offscreen for larger x and y offsets (that could be a bug on my part), and I'm still not eliminating the undistortion border. That could also be me improperly generating the distorted ROI, but I still suspect I'm publishing the wrong thing to camera_info.

The way I like to imagine the process is that image_proc would create a large black image the size of the sensor, place the subwindow image from the camera into it at its distorted coordinates, perform an undistort on the whole image, then window out a new ROI image where the user originally requested it and publish that. But again, it seems like two sets of xy offsets would have to change hands from the camera driver to image_proc to accomplish that. Or is it the responsibility of some other node downstream to compute the window within the image_proc-generated rectified image that matches the original ROI request with do_rectify set to true? Should I be able to toggle do_rectify on and off and see the visible parts of the image stay put while the distortion border moves around?
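The bounding-box idea from the question can be sketched as code: map the corners and edge midpoints of the desired rectified ROI through the distortion mapping, then take the enclosing axis-aligned box as the raw ROI to request from the camera. `distort` here is whatever rectified-to-raw pixel mapping the driver has available (hypothetical; in practice it would come from the calibration):

```python
def raw_roi_for_rect_roi(x, y, w, h, distort):
    """Bounding box in raw coordinates covering a rectified ROI.

    Samples the corners and edge midpoints of the rectified ROI, maps them
    through `distort`, and returns (x, y, w, h) of the enclosing box.
    """
    pts = [(px, py)
           for px in (x, x + w / 2, x + w)
           for py in (y, y + h / 2, y + h)]
    mapped = [distort(px, py) for px, py in pts]
    xs = [p[0] for p in mapped]
    ys = [p[1] for p in mapped]
    x0, y0 = min(xs), min(ys)
    return x0, y0, max(xs) - x0, max(ys) - y0

# With an identity mapping, the raw ROI equals the rectified ROI:
assert raw_roi_for_rect_roi(10, 20, 100, 50, lambda a, b: (a, b)) == (10, 20, 100, 50)
```

Sampling edge midpoints as well as corners matters because radial distortion bows the ROI edges outward, so corners alone can under-cover the region.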

