
Re: [LUG] A slice of Pi gets closer...

 

On Mon, 16 Jan 2012, Kai Hendry wrote:

On 16 January 2012 18:04, tom <tompotts@xxxxxxxxxxxxxxxxxxxxxx> wrote:
No - it's not just a toy - it's a small, compact computer. As such you can do just about anything with it.

Except browse, or use it as a webcam controller, for example.

256MB of RAM was quite common on a lot of usable low-end Android devices.

I think most people will go for the 128MB option. Probably by the time the Raspberry Pi is delivered, the iPhone 5 will have 1GB of RAM.

It will be interesting to see what people use the R-Pi for. I will get one, more out of curiosity than anything else, but I do have one little project I want to put it to use for (as long as the SDL libraries work :)

As for the webcam thing - well, I'm confused, but I suspect people are either expecting too much or are using the CPU to offload some of the webcam's data processing - a USB webcam should require zero CPU other than that needed to get data off the camera and onto the LAN connection. A 720p stream is not that fast compared to Ethernet.
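As a rough back-of-envelope (my figures, not from anyone's post - assuming a camera delivering 16-bit-per-pixel video at 30fps, MJPEG-compressed at a modest 10:1):

  1280 x 720 x 30 fps x 16 bits/pixel ~= 442 Mbit/s raw
  442 / 10 (modest MJPEG compression) ~=  44 Mbit/s

That's comfortably under gigabit Ethernet, and workable even on a 100Mb link.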

And if you're going to do that, you might as well use one of the USB <-> Cat5 extender cables discussed here recently rather than waste a processor board on doing nothing more than acting as a media converter.

It might not run badly written bloatware, but the CPU is around 1000 times more powerful than the old DEC machine I used to share with 20 other engineers and 130 secretaries on 1MB of RAM. And it's got a GPU too....

My ThinkPad has a GPU which fails to render any 3D. :}

More likely it's lacking the drivers or application software to utilise the GPU...

If you want to tackle bloatware, don't run http://linuxmint.com/. In fact, think of shunning GNU and Linux altogether for a BSD flavour. ;)

Remind me again, which C compiler is the default under *BSD?

;-)

The software tools we use are becoming bloated - an example is the 'grep' command, and this applies to both the BSD and GNU versions... Once upon a time, to grep through files recursively, you'd couple it with another tool called find - e.g.

  find . -name \*.c -exec fgrep longVariableName {} /dev/null \;

Now, you simply:

  fgrep -R longVariableName .

The -R flag tells grep to go recursive.

So grep has been bloated by adding in code to recursively scan directories - and this is arguably worse, IMO, as there's no easy way to scan just the *.c files, so it's doing more work. (GNU grep has --include/--exclude for this; BSD grep doesn't, IIRC.)
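For what it's worth, a sketch of the GNU-only way to get the file filtering back (the quotes stop the shell expanding the glob before grep sees it):

  grep -R --include='*.c' longVariableName .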

Worse - fgrep had a new flag added which I always forget: "-H", which forces it to print the filename with every match. In the olden days grep would only print filenames when given 2 or more files, hence the /dev/null in the find command above - it's minor, but it all adds up nevertheless...
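So the old one-liner can now drop the /dev/null trick entirely - assuming your grep has -H, which both the GNU and current BSD versions do:

  find . -name '*.c' -exec fgrep -H longVariableName {} \;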

Another annoyance I have is ssh... Once upon a time we had rsh - great, but there was no encryption and it was easy to mis-configure into being insecure. So why whinge about ssh? It's encrypted and you can't turn that off - a good thing, many will say, until you need to use it as the back-end for rsync to copy hundreds of GB of data from a slowish server to a new one, or over a gigabit LAN, etc. Then the overhead of the encryption really slows things down - I've seen ssh burning 50% CPU on a 3GHz processor when used with rsync. The data rate goes down too - struggling to reach 100Mb/s at times, even though the disk drives can sink/source 10x that rate. So now I'm looking at going back to rsh for some LAN-based servers which need to exchange a lot of data via rsync.
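For the curious, switching rsync's transport is a one-flag job - a sketch, with the host and paths invented for illustration, and assuming rsh is installed and permitted between the two machines:

  # plain-text transport - no encryption overhead, so LAN use only
  rsync -av --rsh=rsh /export/data/ newserver:/export/data/

Running rsync in daemon mode on the receiving end (rsync:// URLs) avoids ssh in much the same way.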

And so it goes on with everything - now we need windows with rounded corners, anti-aliased, and a bit of transparency so the rounded corners let the background show through. That adds hundreds of lines of code which have to be executed, making things slower - until you give it a faster processor.

However, we've more or less reached processor top speeds for now - so enter the GPU(s): get the main CPU to offload the pretties to the GPU, if it can. But that needs more code running in both the main processor and the GPU - complexity goes up to the point where it's almost impossible for one person to fully comprehend all the workings of a modern PC.

Gone are the days of knowing an Apple II or BBC Micro inside and out - knowing what every PEEK/POKE meant. Is that a good or a bad thing? I'm not sure. Who knows.

Gordon

--
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq