Posts: 304 | Thanked: 233 times | Joined on Jul 2009 @ São Paulo, SP, Brasil
#31
Originally Posted by thorbo
Does anyone know if it is possible to capture a "raw" image, or perhaps a .tif? Currently I believe the output is only .jpg, and they are processed. I could see times where it would be nice to capture all the details the imager would allow and post process somewhere else, if desired.
When I was capturing images to process with a temporal median filter, I used mplayer to capture a YUV stream. I am not sure whether it delivers actual raw data (I didn't verify that it wasn't JPEG-compressing the frames and then handing me decompressed YUV), but it seemed fine to me. You can also try gstreamer... Both can also save in lossless formats like PNG, avoiding JPEG noise from the start.

http://talk.maemo.org/showthread.php?p=357615
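For the curious: the temporal median trick is just a per-pixel median over a stack of frames, which rejects transient objects and recovers the static background. A minimal NumPy sketch (names are mine, not the code I actually used):

```python
import numpy as np

def temporal_median(frames):
    """Per-pixel median across a list of same-shaped frames.
    Transient values (people, cars, noise spikes) get rejected."""
    stack = np.stack(frames, axis=0)  # shape: (n_frames, H, W)
    return np.median(stack, axis=0).astype(stack.dtype)

# Toy example: three 2x2 "frames", one with a transient bright pixel.
f1 = np.array([[10, 10], [10, 10]], dtype=np.uint8)
f2 = np.array([[10, 255], [10, 10]], dtype=np.uint8)  # outlier at (0, 1)
f3 = np.array([[10, 10], [10, 10]], dtype=np.uint8)

background = temporal_median([f1, f2, f3])
print(background)  # the outlier is gone: every pixel is 10
```

With real captures you would feed it a few dozen frames instead of three; the median needs the transient object to occupy each pixel in less than half the frames.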

***

I am quite excited about doing AR and other computer vision (and image processing in general) applications with NITs, but lately I have become a little suspicious that we don't have the tools to take full advantage of the processor. How do I create an application using OpenCV, OpenMAX or any other libraries, making sure I am exploiting the DSP? (I'm not sure the N8x0 offers much, but the N900 should have that NEON thing, right?)

For example, I am implementing an algorithm called MonoSLAM. One of the tasks I need to perform is to match features in sub-regions of images captured from the camera. I'm not sure the processor can bear it, but my first try would be matching with normalized cross-correlation, and for that I need to calculate DFTs of image patches; calculating integral images would be nice too. Are there libraries available to do this kind of low-level image processing for me? Can I even implement them myself somehow? Can I use GCC, or do I need some sort of proprietary compiler?
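Just to make the two building blocks concrete, here is a plain NumPy sketch of an integral image (summed-area table, for O(1) window sums) and of normalized cross-correlation of a patch against a candidate window. Function names are mine and this is unoptimized reference code, not what a DSP/NEON version would look like:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border row/column, so that
    ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def window_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def ncc(patch, window):
    """Normalized cross-correlation of two same-sized arrays:
    +1 means a perfect match (up to brightness/contrast)."""
    p = patch - patch.mean()
    w = window - window.mean()
    return (p * w).sum() / np.sqrt((p * p).sum() * (w * w).sum())

rng = np.random.default_rng(0)
img = rng.random((32, 32))
ii = integral_image(img)

# O(1) sum of the 8x8 window at (5, 7) matches the direct sum:
s = window_sum(ii, 5, 7, 8, 8)

# A patch correlated with itself scores 1.0:
patch = img[5:13, 7:15]
score = ncc(patch, patch)
```

A fast search would slide the patch over a sub-region and keep the offset with the highest score; the FFT comes in when you compute the correlation numerator for all offsets at once instead of one window at a time.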

I remember some time ago people were talking about making an OGG decoding library that made use of the DSP. A Nokia developer, if I am not mistaken, said it probably wouldn't pay off. How come?

I want to see a "demo" of the OMAP DSP... I want to run two programs, one compiled with and the other without the "magical library" that makes signal-processing tasks faster. And if it isn't much faster, I want a different processor, without useless DSP features that we can't or don't need to exploit. There are few things I hate more than a processor with unused instructions.

I am very concerned about all this because only the other day I discovered, after a long time working with OpenCV just for prototypes, that you need some proprietary Intel library to get a really fast OpenCV implementation. No SIMD for the "free" crowd. I think that's kind of lame... Now I'm trying to figure out how things stand with OMAP. Is it the same? Do I need some proprietary compiler or library to make full use of SIMD? If that is the case, I'll end up programming for my Chinese Z80 music player instead. At least there I get the feeling I am using all the hardware available...

Sorry for the long rant; I am not sure I should start another thread, since so many concerned people are around here anyway.

I am not hijacking the thread. I am just asking: what are the tools we should be using to make the best image processing applications possible for the NITs? How should I implement my feature matching algorithm? How should I implement my own demosaicing algorithm (supposing we can get truly raw data from the camera)? I want to make an audio synthesizer too... How do I make it as efficient as possible?
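In case "demosaicing" sounds abstract: the sensor delivers one color sample per pixel in a Bayer mosaic, and demosaicing reconstructs full RGB. A deliberately naive half-resolution version for an RGGB pattern, in NumPy (the function name is hypothetical; a real implementation would interpolate to full resolution and handle edges):

```python
import numpy as np

def demosaic_rggb(raw):
    """Naive half-resolution demosaic of an RGGB Bayer mosaic.
    Each 2x2 cell (R G / G B) becomes one RGB pixel, with the two
    green samples averaged. raw must have even height and width."""
    rgb = np.zeros((raw.shape[0] // 2, raw.shape[1] // 2, 3), raw.dtype)
    rgb[..., 0] = raw[0::2, 0::2]                      # R sample
    g = raw[0::2, 1::2].astype(np.uint32) + raw[1::2, 0::2]
    rgb[..., 1] = (g // 2).astype(raw.dtype)           # mean of two G
    rgb[..., 2] = raw[1::2, 1::2]                      # B sample
    return rgb

# A flat gray mosaic should come out as flat gray RGB:
raw = np.full((4, 4), 100, dtype=np.uint8)
rgb = demosaic_rggb(raw)
print(rgb.shape)  # (2, 2, 3)
```

The green average is done in a wider integer type to avoid uint8 overflow; that kind of detail is exactly where SIMD instructions (saturating adds, halving adds) would earn their keep.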
 
