Yesterday I decided to resurrect my website and blog, which I took down about five years ago when my dedicated host died. My domain has sat empty since then and I had not been sharing my wisdom with the world. So I’ve decided to bring it back, but the database was Joomla 1.6 and the new Joomla is 3.x, and that is one hell of a leap in versions, so clearly it wasn’t plain sailing. I imported the database elements I needed from an SQL dump in my backups, giving them a different Joomla table prefix, which was essential because the old and new tables have different schemas (the columns have changed). The important thing was finding a way to transfer the articles from the old table to the new one. I started to construct an insert statement which mapped all the new columns to the old values, but I am impatient and wondered if there was another way of doing this. Fortunately I found the upgrade SQL for 3.x: https://forum.joomla.org/viewtopic.php?t=760150#p2910019 and then used a simple “Operation” in phpMyAdmin to transfer the “_content” table. Then I needed to fix the “_categories” section, which had some numbering issues to deal with. My work is not complete but it has filled things out!
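The column-mapping insert I started to build can be sketched like this. SQLite stands in for MySQL here, and the table and column names are simplified stand-ins rather than the real Joomla 1.6 or 3.x schemas:

```python
import sqlite3

# Simplified stand-ins for the old and new Joomla content tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE jos_content (id INTEGER, title TEXT, introtext TEXT, created TEXT)")
cur.execute("CREATE TABLE j3x_content (id INTEGER, title TEXT, alias TEXT, introtext TEXT, created TEXT)")
cur.execute("INSERT INTO jos_content VALUES (1, 'Hello World', 'First post', '2011-01-01')")

# Map each new column to an old column, or derive/default it where the
# old schema has no equivalent (e.g. building an alias from the title).
cur.execute("""
    INSERT INTO j3x_content (id, title, alias, introtext, created)
    SELECT id, title, lower(replace(title, ' ', '-')), introtext, created
    FROM jos_content
""")
conn.commit()
print(cur.execute("SELECT alias FROM j3x_content").fetchone()[0])  # hello-world
```

The upgrade SQL linked above does essentially this, column by column, which is why finding it saved me writing the mapping by hand.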
I’ve been playing a lot with home automation, and one of the interesting things I have found are the products of Itead.cc, in particular their Sonoff home automation range. I just received one of their latest products, the Sonoff LED Dimmer. Of course I don’t care about their software; I want to make it run the ESPEasy firmware, which will need a re-flash, so I stripped it down and it is time to share those pictures with the world. The next step is to make it work by attaching a TTL-USB adaptor to the header that is visible on the ESP8266 card which rises out of the main board.
I would comment that the build quality of the PCB is relatively poor, with lots of stray solder and scratches on the bottom. The board has two channels attached for LEDs, but there are also RGB points marked out on the PCB which are calling out to be tested.
For some time now I have been worried about the present batch of “Alternative Energies”. Their biggest problems are to do with efficiency and their ability to deliver energy when it is needed rather than just when it is available. Great savings can be made through energy efficiency in order to reduce our need for energy, but fundamentally, to achieve a low-carbon existence we need ways to make “Alternative Energies” work for us; by “Alternative Energies” I mean taking advantage of natural sustainable sources of energy such as wind, wave and solar power. Making best use of these sources is even more important since the German Government decided to shut down all of its nuclear power generation earlier than planned, because European fuel prices will now rise dramatically while Germany is vastly more dependent on fossil fuels until it can fill the gap with viable alternatives.
Currently the way we store excess energy in the grid is to convert the surplus electricity into potential or kinetic energy until it is needed again later. There are several facilities in the UK which pump water uphill to large reservoirs in a technique called “Pumped Storage Hydroelectricity”. By pumping the water uphill when you have excess energy you can later let it come back down again and recover the energy with hydroelectric turbines. Each time you do something like this you waste some of the energy because of energy conversion inefficiencies.
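To put some rough numbers on pumped storage: the stored energy is just gravitational potential energy, m·g·h, minus the conversion losses on the way up and down. The figures below are illustrative, not taken from any real plant:

```python
# Back-of-envelope pumped-storage calculation. All figures are
# illustrative assumptions, not data from a specific facility.
g = 9.81            # m/s^2
mass = 1_000_000    # kg (1,000 tonnes of water)
height = 500        # m of head between the two reservoirs

stored_j = mass * g * height          # potential energy in joules
pump_eff, turbine_eff = 0.85, 0.90    # assumed pump and turbine efficiencies
round_trip = pump_eff * turbine_eff   # fraction of input energy recovered

stored_kwh = stored_j / 3.6e6
print(f"stored: {stored_kwh:.0f} kWh, round trip: {round_trip:.0%}")
```

So even a thousand tonnes of water up a mountain holds only about 1.4 MWh, and roughly a quarter of what you put in is lost to the conversions, which is exactly the inefficiency mentioned above.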
Wind energy is interesting: when the wind blows we get a fair amount of energy back from the gigantic turbines. The most you can ever capture from a wind turbine is about 59% of the wind energy passing through it, a fact of physics proved by Albert Betz in 1919. That is only the upper limit; in reality the conversion from kinetic energy (the motion of the wind) to electrical energy always loses efficiency in gears, dynamos and power couplings. Because this energy is available “When The Wind Blows” and at no other time, there have been occasions when the National Grid has had to shut down turbines because they weren’t needed, which is a great waste of their potential.
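Betz’s limit is easy to put into numbers: the power in the wind through the rotor disc is ½·ρ·A·v³, and Betz’s bound says at most 16/27 (about 59.3%) of that is capturable. The turbine size and wind speed below are illustrative figures, not any real installation:

```python
import math

# Power through a turbine's rotor disc and the Betz ceiling.
# rho, radius and wind speed are illustrative assumptions.
rho = 1.225      # kg/m^3, air density at sea level
radius = 50.0    # m, blade length of a large turbine
v = 10.0         # m/s wind speed

area = math.pi * radius**2
p_wind = 0.5 * rho * area * v**3   # kinetic power passing through the disc
betz = 16 / 27                     # Betz's 1919 upper bound (~59.3%)
p_max = p_wind * betz

print(f"wind power: {p_wind/1e6:.2f} MW, Betz maximum: {p_max/1e6:.2f} MW")
```

Note the v³ term: halving the wind speed cuts the available power by a factor of eight, which is why intermittency hurts wind so much.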
Solar energy is another area of great interest to many people, yet I struggle to get excited about what should be a great source of energy, because everyone focuses on Photovoltaic (PV) generation, which uses chemically doped materials to convert sunlight directly into electrical current. The reason I struggle is that PV isn’t very efficient: typical high-quality solar panels are about 14-17% efficient, and that really isn’t very much. Solar PV cells also need various exotic chemicals in their production, of which only a portion is recycled, and they aren’t exactly “low carbon” in their transport around the world. Solar energy is, logically, only available during the hours of sunlight and, again logically, is subject to the intensity of the sun at the location.
In an “Off Grid” environment, where a home owner has no access to mains electricity, it is quite common to store energy in batteries so that the peak energy availability can be spread over a longer period; not everyone has access to a source of large quantities of water and a reservoir pond (or two) to store it in. Batteries are great for our mobile phones: they store energy in chemical form for good periods of time and release it on demand. Some batteries can release their energy quickly and some release it slowly over long periods of time. But fundamentally batteries are flawed because they depend on harsh chemical processes which break down the components over time and can result in failure of the cell. Also, you can only really discharge a deep-cycle battery to 70-80% of its capacity before you start causing premature damage to the cell, so you need to be careful with your management of supply and demand.
Some time ago I started to wonder: why don’t we store more energy as directly coupled kinetic or potential mechanical energy? Take wind farms, for example: I wondered if it wouldn’t be a good idea to install giant clock springs under them (or in their towers) so that we could regulate the release of all that good mechanical energy. Giant clock springs sound silly at first, but many companies already use kinetic energy storage as a power backup medium. In computer data centres, when you have a power failure it takes time to start the local on-site diesel generators, and you need something to keep all the equipment going until the generator is up to speed. Some companies use giant banks of batteries which they carefully maintain and monitor, but I have seen a few UPS failures and they get rather messy and expensive; batteries can also release hydrogen gas, which could harm operatives working in the UPS battery room. The alternative that some companies really do use is a motor spinning a giant “fly-wheel” on a very efficient bearing. When the power fails that mass still has a great deal of momentum, and as the motor is no longer supplying force to keep it spinning, it can be used as a generator to turn that kinetic energy back into electricity. There can be enough energy in the momentum of a large enough mass to keep a data centre alive until the generator is ready to take the strain. The spinning-mass technique does suffer from the problem that you can’t store kinetic energy for long periods, because friction in the bearings bleeds away momentum over time and hurts efficiency, but it is great for short-term, non-toxic energy storage. Some buses around the world are now using spinning masses as a means of kinetic energy recovery in braking, and they can then use that energy to help move the bus away from the stop before the engine takes over again, a nice clean “Start-Stop” technique.
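The flywheel idea is easy to sanity-check: a spinning disc stores E = ½·I·ω², with I = ½·m·r² for a solid disc. The mass, size, speed and load below are my own illustrative assumptions, not the specification of any real UPS flywheel:

```python
import math

# Energy in a spinning mass: a UPS-style steel flywheel.
# All figures are illustrative assumptions.
mass = 600.0     # kg
radius = 0.5     # m, treated as a solid disc
rpm = 3000

inertia = 0.5 * mass * radius**2       # I = 1/2 m r^2 for a solid disc
omega = rpm * 2 * math.pi / 60         # angular velocity in rad/s
energy_j = 0.5 * inertia * omega**2    # E = 1/2 I omega^2

# How long could this carry a 50 kW load, ignoring losses?
load_w = 50_000
print(f"{energy_j/1e6:.1f} MJ stored, ~{energy_j/load_w:.0f} s of ride-through")
```

A bit over a minute of ride-through from a 600 kg disc: not much for long-term storage, but plenty to cover the gap until a diesel generator is up to speed, which is exactly how these systems are used.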
This application in buses, together with the idea of hydroelectric storage, leads me to another angle. The disadvantage of water as an energy store is partly that it can’t be compressed and takes up a great deal of space, and the disadvantage of kinetic storage is that the spinning mass can’t spin forever. So what about storing energy in a static way, under compression, which can be quickly released on demand? This leads us neatly to: Compressed Air Energy Storage. Of course I don’t claim to be the first to propose such an idea, because it is already in industrial use around the world to a limited degree. But I would like to highlight the concept because it deserves more attention, and because I think it might have some interesting applications as a battery replacement technology.
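As a sanity check on how much energy compressed air can hold: in the ideal (isothermal) case, a tank at pressure P and volume V expanding back to atmospheric pressure P₀ can do W = P·V·ln(P/P₀) of work. The tank size and pressure below are illustrative assumptions; real systems recover less because the heat of compression is lost:

```python
import math

# Ideal isothermal work recoverable from a compressed-air tank:
# W = P * V * ln(P / P0). Tank size and pressure are illustrative.
p0 = 1.0e5    # Pa, atmospheric pressure
p = 300.0e5   # Pa, a 300-bar storage tank
v = 1.0       # m^3 tank volume

work_j = p * v * math.log(p / p0)
print(f"~{work_j/3.6e6:.1f} kWh recoverable from a 1 m^3 tank at 300 bar")
```

Roughly 47 kWh per cubic metre at 300 bar in the ideal case, which is in the same ballpark as a sizeable battery bank, and with no chemistry to degrade.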
In an off-grid situation we could see a tank placed in an out-building which holds a store of highly compressed air, generated from wind, solar or other inconsistent energy supplies. In addition I think that some kind of Stirling engine arrangement could supply the mechanical work for solar energy without wasting energy on conversion to and from electricity just to achieve compression. What about automotive situations? Many companies are installing very expensive and potentially unreliable batteries in cars; what about compressed air tanks which could act as a kind of compressed-air transmission instead of a gearbox? Directly drive the gears with the compressed air, perhaps? Just add a 600cc compressor and regenerative braking, and you should have a snappy little number!
[Note: this article has nothing to do with my employer; these are my own musings and may not represent company policy.]
Recently I have been asked many times about Digital Radio in the UK, specifically about implementing DAB. This has required me to revisit the situation and fully understand the commercial and technical issues around the deployment of digital radio. The UK Government has asked the industry to work towards a switch-off of the current FM radio services. It was originally suggested that a decision about when to switch off would be made in 2015, but in the past year the government stepped back from any fixed dates and a lesser progress review will be done in 2013 instead. Once that review has been done they will know when they can start thinking about a more formal switch-off programme for analogue services. In this post I want to talk about what DAB means to me and what I would like to see done over the long term.
To give some rough background: the DAB system was designed in the 1980s and implemented by the BBC in the UK in 1995. It uses OFDM with DQPSK modulation and, after error correction, has a usable payload of 1,184 kbit/s. The audio codec is MPEG 1 Layer 2, which produces acceptable audio at about 192 kbit/s.
According to the Guardian newspaper the first 5 million DAB radios were sold prior to 2007 and the next 5 million between 2007 and 2009, so between launch and 2009 10 million DAB radios were sold. If 5 million of those radios are over four years old then, by attrition, quite a few will already have been ‘retired’.
I am told that the UK needs 60 million new digital radios to satisfy the current market: cars, kitchens, portable radios, bathrooms, bedrooms, etc. This means that while a portion of the market has been addressed, only 10-20% of radios have been functionally replaced, and most importantly there has been very little take-up in cars. The SMMT (the trade body which represents car manufacturers) has committed to ensuring that by 2013 all cars are fitted with WorldDMB radios which can be used all over Europe, but currently it costs up to £2,000 to have a digital radio fitted at manufacture.
Reception is currently quite limited, with coverage of around 85-90%, and the system is prone to interference from impulsive noise sources.
So, as you might guess I am really not a fan of DAB.
I would like to propose an alternative. I don’t want to kill the millions of radios that are currently out there, as that would make me wildly unpopular; what I want is a next-generation system that can replace DAB in the long term. I know governments aren’t prone to long-term thinking, but I believe there should be a replacement for DAB which we can quickly work toward without eliminating the current system entirely. It should be a system which is compatible with some current technology and which provides a substantial benefit to both consumers and broadcasters.
Some people talk about upgrading to DAB+, which uses AAC audio coding instead of MPEG 1 Layer 2. This provides a nominal 50% capacity advantage but doesn’t address coverage issues. The current DAB system gives six services per 1.7 MHz channel in VHF Band III; adding DAB+ would double the number of services, but that isn’t a big advantage compared to the losses.
My proposal is to launch a new multiplex using DVB-T2A as the transmission system. This is the same transmission system as is used in Freeview HD but with a much smaller bandwidth of 1.7 MHz; it could be broadcast either in Band III or in UHF alongside the TV services, even at the full 8 MHz bandwidth. For the purposes of this post I will assume it should be broadcast in VHF Band III, to mimic the 1.7 MHz DAB licences. The DAB system could be kept as it is, or rationalised to fewer services. If one multiplex licence was given to DVB-T2A then a single multiplex could broadcast fifty-six services against DAB’s equivalent six! This is a dramatic improvement with the same coverage and would also improve noise immunity, and with fewer services the coverage could be increased at no extra cost.
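The arithmetic behind those service counts is straightforward. The DAB payload and MP2 bitrate come from the background above; the T2 payload in 1.7 MHz and the AAC bitrate are my own assumptions for illustration, not measured figures:

```python
# Rough mux capacity comparison. The DAB figures come from the system
# background; the T2A payload and AAC bitrate are illustrative assumptions.
dab_payload_kbps = 1184    # usable DAB mux payload after error correction
mp2_service_kbps = 192     # MPEG 1 Layer 2 at acceptable quality
t2a_payload_kbps = 7200    # assumed DVB-T2 payload in a 1.7 MHz channel
aac_service_kbps = 128     # assumed AAC rate at broadly comparable quality

dab_services = dab_payload_kbps // mp2_service_kbps
t2a_services = t2a_payload_kbps // aac_service_kbps
print(f"DAB: {dab_services} services, DVB-T2A: {t2a_services} services")
```

The gain comes from two multiplied factors: T2’s far better use of the spectrum, and AAC needing far fewer bits per service than MPEG 1 Layer 2.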
Using DVB-T2A would allow the use of Multiple PLPs, a technology which allows different services to have their own specific space/time in a broadcast stream. Each PLP may have different transmission properties, such as error correction, which would allow different services to pay for different levels of protection: a commercial service could have less protection, pay less money and accept less reliable coverage. Other services could also be delivered in the payload, so a low-bitrate video service could provide mobile TV. Current TV broadcasts are not very suitable for mobile reception (such as in-car), and this proposal would allow TV channels to be received on the move with high reliability; they would be MPEG-4 at relatively low quality. Also, because of PLPs, when the receiver is tuned to one service it can ignore, and even avoid decoding, the services it doesn’t need. Not decoding unwanted broadcast data reduces the power demands of the product and improves battery life.
Using a relatively cheap T2A Bluetooth device, any smartphone could receive radio and mobile TV services. These would not demand a great deal of power and would be highly portable. The same system could also deliver IP data packets for other content related (or unrelated) to the broadcast. Regional and local information about network changes could be transmitted, and devices could even be location-aware for easy tuning: just set your location in the product and it would know which services to offer you. Channels could carry service text defining now/next and optionally broadcast a full week’s guide. Programme content could include series-link information, allowing the consumer to be notified when a programme is being broadcast and even record that content to local storage.
A DVB-T2A radio would contain components which are already commodities in the countries adopting T2 for their HDTV systems, an advantage because high production volumes drive component prices down competitively. Currently there are very few manufacturers of DAB radio silicon, but many large companies produce large volumes of T2 silicon for the TV market. I believe the price of DVB-T2A radios could become highly commoditised, even more so than current models, and many existing Freeview HD TV models could receive the services without hardware changes (software updates may be needed to recognise them).
Because of the large number of services that can be carried in a single multiplex, local radio could economically be transmitted nationally, which might be popular with consumers. Local multiplexes could be transmitted in existing DAB white space until DAB is ready to be retired in a further 10-14 years.
Brand-wise it would make sense to call this “Freeview HD Radio”, so that consumers don’t have to worry about “Digital” confusion; consumers can see the value of the quality increase in the “HD” brand, and the compatibility with TV could be established quickly.
Products could be quickly developed, and the standardisation process would be simple if based on existing core DVB and DTG standards. A Danish broadcaster is already running tests with T2A, so the UK would not be alone, and Finland and other Nordic markets use Band III for TV services on T2, so there is room for expansion.
I believe this is a cost-effective and economical way of delivering a high-quality digital radio service, in contrast to the existing system, which is limited and ageing quickly. Before we are stuck with a legacy of poor products I think we should introduce a system which is fit for the next two decades and keeps the UK competitive in the world. Once one country sets the standard many others will follow, and the market could be commercially vibrant.
What is the migration path for DVB-T2A? Transmission technology is bounded by the Shannon Limit, determined in 1948 to be a physical constraint on transmission capacity. It states that for any given transmission spectrum there is a finite quantity of information that can be put through it. DVB-T2 is described as being very ‘close to Shannon’, so as a transmission technology there is unlikely to be much that can beat it within the same spectrum conditions. The error correction it uses is really intensive too; previous systems were limited by chip processing abilities, but T2 pushed designers to implement everything but the kitchen sink. I am not saying T2 is all that can be done, but it is very, very fit for purpose.
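Shannon’s limit has a simple closed form: C = B·log₂(1 + S/N). For a 1.7 MHz channel it gives a hard ceiling that no modulation scheme can exceed; the signal-to-noise ratio below is an illustrative figure, not a measured broadcast condition:

```python
import math

# Shannon's 1948 capacity limit: C = B * log2(1 + S/N).
# The 20 dB SNR is an illustrative assumption.
bandwidth_hz = 1.7e6
snr_db = 20.0
snr_linear = 10 ** (snr_db / 10)   # 20 dB is a 100x power ratio

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"theoretical ceiling: ~{capacity_bps/1e6:.1f} Mbit/s")
```

Whatever payload a real T2A multiplex achieves in 1.7 MHz, it must sit below this kind of ceiling for the prevailing SNR, which is why “close to Shannon” means there is little headroom left for a successor transmission system in the same spectrum.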
The Fraunhofer Institute recently started demonstrating a new audio codec, and this is where I see the next advancement coming from: if more than a 30% improvement in efficiency could be shown then there might be a migration path for such products. The DVB model includes support for broadcasting software updates to receivers with a system called DVB-SSU; while this can’t change the hardware, it can keep products in line with new software standards. Increasingly, products are moving away from hardware media decoders towards DSPs and software codecs. It makes sense that a future roadmap could define a design which allows codecs to be upgraded (within resource limits). It is possible now to plan for a post-2020 design which implements upgradable profiles for decoding content. As processing power increases and efficiency improves, not only codecs are being implemented in software but also the radio reception function, using a “Software Defined Radio”; this *may* be standardised later but presents a bigger risk: if you upgrade the radio receiver function incorrectly you will make the product non-functional and possibly not upgradable again.
I welcome comments, and no doubt some will be harsh, but I believe it is better to cut DAB off now before it gets much worse. Let’s not be stuck with the digital equivalent of AM radio while the rest of the world moves forward without a legacy.
LinkedIn discussion: http://linkd.in/pu0O2G
So, this morning I was having a retro moment and wondered about the original Xbox, the classic one from 2001 which was so popular. It was built on a pseudo-Windows system with an Intel processor. Then I remembered the PS2 Slim, which was reintroduced long after the PS3 had taken over as Sony’s flagship, and how Sega have also licensed their technology to create retro clones.
If we look at the Xbox1’s original specification:
CPU: 700MHz Pentium III Coppermine
RAM: 64MB DDR1 @ 200MHz
GPU: Custom NVidia ASIC @ 233MHz
Audio: NVidia custom Surround Processor
Storage: 8GB IDE HDD
Security: Secure BIOS
Extras: 100Mbit Ethernet, Analogue Component HD, USB1.1, and other AV connectors
So, when you compare this to the CE 3100 from Intel, which is being used by set-top box vendors to build the next generation of multimedia products, you find some interesting parallels:
CPU: +800MHz Pentium-M
RAM: Up to 3GB DDR2
GPU: Intel GMA500 (PowerVR SGX 535)
Audio: Dual core 337MHz DSP processors
Storage: Flash or SATA
Optical: DVD via SATA
Extras: GBit Ethernet, HDMI, USB2, and other AV connectors
So, Dear Microsoft, why not ‘Reload’ the old Xbox classic as a new product and get some revenue from that old architecture? The CE range supports DirectX 9, so there should be legacy support for the graphics calls. I don’t know how the GMA 500 compares to the Xbox1’s custom ASIC, but they are 8-9 years apart in development, so any shortfall might be resolved with a bucket of faster DDR2 RAM and the higher CPU clock.
I would imagine an XBox Reloaded spec would look something like this:
SoC: Intel CE3100
RAM: 256MB of DDR2 @ 800MHz (a bucket extra useful for other things)
Storage: 8GB of Flash (shouldn’t need more, but can utilise USB 2 flash or HDD)
Optical: Slimline DVD-ROM
AV: HDMI, TOSLink, Composite
Networking: 100Mbit ethernet (GBit might increase power/cost)
The whole thing should be able to emulate the Xbox’s original design without much special assistance: just the addition of SATA support to the microkernel, modification of the security mechanism and replacement of the graphics drivers (the highest-risk element). If there were any problems it might even be possible to use a microkernel bootloader or BIOS to emulate IDE on SATA in legacy mode and possibly even map the GPU calls. I would also put a bootloader on the box which booted a version of MeeGo Linux stored in Flash as an alternative media player and possible DVD player function.
Thus you would have a decent media player, a TV browser and, most importantly of all, a very cool retro games console capable of playing games like Halo, Project Gotham Racing, MotoGP and Splinter Cell, all for under £100 retail! I know you can get a new Xbox 360 for £160, but there is always a market for the retro and a lower-end product. The return on investment could be good and it could reach new markets as a “computer for all” in developing markets!
There have been a number of articles about the National Health Service’s IT over the past 24 hours. Most of them are about the LulzSec security breach (some mention how helpful LulzSec have been, but most focus on the negative), but there have also been articles about the NAO report on the Ambulance Service and the Socitm report as well. The Socitm article got me thinking that the NHS should follow the example of nebula.nasa.gov, which is building a cloud infrastructure specifically for NASA and its dependencies. NHS departments could then just bid for server time and be charged appropriately. Here is my proposal; it is probably poorly informed and politically impossible, but that has never stopped anyone writing a blog before! Read more below…
OK, I had this idea a while back and I finally got round to designing the concept. I don’t know if it would fly, but I think it is quite neat. Fundamentally the principle is that people in rural areas are pretty much excluded from the e-Cash revolution because they don’t have the infrastructure. By rural I am most interested in the way people in small villages or remote locations interact, especially in developing countries. We don’t have a means to eliminate currency in their domains; we only have solutions for client-server architectures in rich urban settings. Also, the NFC proposals are being built around expensive smartphones, which doesn’t help the poorer in society either. So I designed a device which should be cheap (~$10) and which can be used without being dependent on infrastructure.
I did a PDF to illustrate the Portable Currency Device concept.
OK, any posting with religion in it is probably an unwise and dangerous thing to do, but it occurred to me this morning that the computer market is much like organised religion, and here I will lay out my reasons:
1) Microsoft = Christianity
A penitent religion that once dominated the social and political map of the world. Its followers’ enthusiasm is increasingly depreciating, although many continue to attend the ministrations more out of habit than out of true faith. Many evangelical sects still exist; some have fractured from the core authority but still believe in what it stands for. Some orthodox groups exist aside from the mainstream followers and still enjoy great attendance, but without much wider attention. Not nearly as influential as it once was, and it has made some serious mistakes in the past.
2) IBM (AIX or OS/2?) = Judaism
Some view them as the originator of a later much more popular group, others avoid the comparisons and associations. Still has a great many fundamental followers but that number is diminishing. Some followers only practice behind closed doors and outwardly show no signs of an allegiance. Others proudly show their support in the window at key points in the year. Well represented in the finance sector.
3) Apple OS = Islam
Often failing to recognise that the origins of their group actually stem from a common root with other mainstream groups. There is a core of fundamentalists who insist their way is the only way and that all other systems should come to their view or die. More moderate members of the group are satisfied with their choice in life and continue to worship with blind faith. It is the duty of followers to encourage those not following their path to join them.
4) Linux = Hinduism
A group with many deities and various ways of expressing a following. Often peaceful but occasionally a little dysfunctional, with some areas which maintain a legacy in a modern environment, but functional most of the time as long as you don’t try to take it in a direction it isn’t prepared for. It has a style which occasionally mixes with other groups but, to the casual observer from the outside, looks intimidatingly different.
5) Embedded RTOS’es = Various native religions
Quite functional in their own environment and supporting the people with their needs. Often looks very different to the mainstream groups and can be incompatible. Smaller followings but often works well, in harmony with the environment.
6) RISC OS = Paganism
May have origins pre-dating the mainstream groups, within an unconnected population, but was pretty much wiped out as travel and the needs of users grew. Of little relevance in modern society but still practised by small groups, who occasionally put on public displays in public spaces, to which some from other groups take offence and others look on with mixed feelings.
If you’ve ever seen full-frame uncompressed 625-line SD with 10-bit component colour then you will know that sometimes resolution doesn’t matter. At a previous employer of mine we could show normal people pictures on a Barco Grade 1 monitor and they would swear it was HD. Freeview just has poor quality because the cost of carriage is so high, especially when there are a dozen regional versions of BBC One or ITV1 and everything has to be compressed down to the n-th degree. The reason regionalisation costs money is that we must have a cellular transmitter design: each region has its own frequency (or more than one, because of relays), and adjacent regions can’t use those frequencies because that would affect coverage. The UK design has many “guard” frequencies to protect adjacent transmitters in this way. If every region carried the same channels then we could use a system called an “SFN”, or Single Frequency Network, in which all the transmitters transmit exactly the same thing at exactly the same time on exactly the same frequency. In an SFN, if you are between two transmitters you receive the signal from both, but instead of causing a problem this actually helps, because the two transmitters reinforce each other.
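The reason the two signals reinforce rather than interfere is that OFDM tolerates echoes arriving within its guard interval, so an SFN works as long as the delay between transmitters stays inside that window. The guard interval below is an illustrative figure, not the planning value of any specific UK network:

```python
# In an SFN, a signal from a second transmitter looks like an echo.
# It is harmless if it arrives within the OFDM guard interval, so the
# maximum useful transmitter separation is roughly c * guard_interval.
# The guard interval value here is an illustrative assumption.
c = 299_792_458    # m/s, speed of light
guard_s = 224e-6   # s, an example guard interval

max_separation_km = c * guard_s / 1000
print(f"transmitters can be up to ~{max_separation_km:.0f} km apart")
```

Echoes arriving from further away than this fall outside the guard interval and start to look like interference again, which is why SFN planning still constrains transmitter spacing even though every site shares one frequency.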