mark :: blog



I use a Dallas 1-wire network for temperature sensing and control around the house. Ideally you run a single cable from the PC interface and attach sensors along the wire, but I didn't think of that before the house was wired. So instead I use a star topology over the existing cat5 networking to each room, with a 1-wire hub from AAG handling switching between the different spokes.

At the Home Automation server is a USB to 1-wire interface unit, which connects directly to the 1-wire hub. Six outputs from the hub go to various places around the house.

Output C6 has a DS18S20 temperature sensor plugged straight into it; this provides the temperature of the home automation cupboard.

Output C5 goes to the lounge, which has a single DS18S20 temperature sensor plugged straight into it; this provides the temperature of the lounge, which is used as the master control for the heating system.

Output C2 goes to the master bedroom, again with a single DS18S20 temperature sensor.

Output C1 goes to the garage, where a temperature sensor is connected and poked out of the vent to read the outdoor temperature, and on the same wire is a 1-wire switch used to control the heating system.

The final output, C4, goes to a hacked wireless doorbell. A 1-wire switch detects when the doorbell is activated. The 1-wire hub provides a 5V supply, so we use this to power the doorbell and save on batteries.

The home automation server polls each device in turn to get a reading. For short events (like a doorbell push) it monitors a latched register that shows whether the state of the input has changed since the device was last polled. I'll write more about the software side of this and give my source code in a later article.
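The polling half of this can be sketched quickly. This is Python rather than my actual code, and it reads sensors through the Linux kernel's w1 sysfs interface rather than the AAG hub, so treat the paths as illustrative assumptions:

```python
import glob

def read_ds18s20(path):
    """Parse a reading from the kernel's w1_slave file: two lines,
    a CRC status ending in YES/NO, then 't=<millidegrees C>'."""
    with open(path) as f:
        lines = f.read().splitlines()
    if len(lines) < 2 or not lines[0].endswith("YES"):
        return None  # CRC failed or short read; skip this sample
    return int(lines[1].split("t=")[1]) / 1000.0

def poll_all():
    """Poll every DS18S20 on the bus (family code 10) in turn."""
    return {path.split("/")[-2]: read_ds18s20(path)
            for path in glob.glob("/sys/bus/w1/devices/10-*/w1_slave")}
```

The CRC check matters on a star network like mine: long cat5 runs occasionally corrupt a conversion, and it's better to drop a sample than log a bogus temperature.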




In March 2005 we started recording how we first found out about every security issue that we later fixed as part of our bugzilla metadata. The raw data is available. I thought it would be interesting to summarise the findings. Note that we only list the first place we found out about an issue, and for already-public issues this may be arbitrary, depending on whoever in the security team creates the ticket first.

So from March 2005 to March 2006 we had 336 vulnerabilities with source metadata that were fixed in some Red Hat product:

111  (33%)  vendor-sec
 76  (23%)  relationship with upstream project (Apache, Mozilla etc)
 46  (14%)  public security/kernel mailing list
 38  (11%)  public daily list of new CVE candidates from Mitre
 24   (7%)  found by Red Hat internally
 18   (5%)  an individual (issuetracker, bugzilla, secalert mailing)
 15   (4%)  from another Linux vendors bugzilla (debian, gentoo etc)
  7   (2%)  from a security research firm
  1   (1%)  from a co-ordination centre like CERT/CC or NISCC
(Note that researchers may seem lower than expected; this is because in many cases the researcher will tell vendor-sec rather than each entity individually, or in some cases researchers like iDefense do not give us notice about an issue prior to making it public on a security mailing list.)
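A summary like the one above is easy to regenerate from the raw data. A minimal sketch in Python, assuming each raw-data line is a "CVE-id source" pair (the real file format may differ):

```python
from collections import Counter

def summarise(lines):
    """Tally discovery sources from 'CVE-id source' lines and return
    (count, percent, source) tuples, most common first."""
    counts = Counter(line.split(None, 1)[1].strip()
                     for line in lines if line.strip())
    total = sum(counts.values())
    return [(n, round(100 * n / total), source)
            for source, n in counts.most_common()]
```

Percentages here are rounded per row, so, as in the table above, they won't necessarily sum to exactly 100.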


Last year I wrote about how both Red Hat and Microsoft shipped the third party Flash browser plugin with their OS and whilst we made it easy for users who were vulnerable to get new versions, Microsoft made it hard. With another critical security issue in Flash last week, George Ou has noticed the same thing.


Just finished the security audit for FC5 - for 20030101-20060320 there were a potential 1361 CVE-named vulnerabilities that could have affected FC5 packages. 90% of those are fixed because FC5 includes an upstream version that includes a fix, 1% are still outstanding, and 9% are fixed with a backported patch. Many of the outstanding and backported entries are for issues still not dealt with upstream. For comparison, FC4 had 88% fixed by version, 1% outstanding, and 11% backported.


What defines transparency? The ability to expose the worst with the best, to be accountable. My risk report was published today in Red Hat Magazine and reveals the state of security since the release of Red Hat Enterprise Linux 4 including metrics, key vulnerabilities, and the most common ways users were affected by security issues.


On Monday a vulnerability was announced affecting the Linux kernel that could allow a remote attacker who can send a carefully crafted IP packet to cause a denial of service (machine crash). This issue was discovered by Dave Jones and allocated CVE-2006-0454. As Dave notes, it's so far proved difficult to trigger reliably (my attempts so far succeed in logging dst badness messages and messing up future ICMP packet receipts, but haven't triggered a crash).

This vulnerability was introduced into the Linux kernel in version 2.6.12 and therefore does not affect users of Red Hat Enterprise Linux 2.1, 3, or 4. An update for Fedora Core 4 was released yesterday.


The Washington Post looked at how quickly Microsoft fixed security issues rated as Critical in various years.

For 2005, Microsoft fixed 37 critical issues with an average of 46 days from the flaw being known to the public to them having a patch available.

For 2005, Red Hat (across all products) fixed 21 critical issues with an average of 1 day from the flaw being known to the public to having a patch available. (To get the list and an XML spreadsheet, grab the data set mentioned in my previous blog and run "perl daysofrisk.pl --distrib all --datestart 20050101 --dateend 20051231 --severity C".)

(The blog also looks at the time between notification to the company and a patch; whilst daysofrisk.pl doesn't currently report that, the raw data is there and I just need to coax it out to see how we compare to Microsoft's 133 days.)
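The "days of risk" metric itself is just date arithmetic. A sketch in Python rather than the Perl of daysofrisk.pl, assuming you already have (public date, fix date) pairs extracted from the data set:

```python
from datetime import date

def days_of_risk(pairs):
    """Average days from an issue becoming public to an update being
    available; each pair is (public_date, fixed_date), and a same-day
    fix counts as zero days."""
    deltas = [(fixed - public).days for public, fixed in pairs]
    return sum(deltas) / len(deltas)
```

Note that an average like this is easily skewed by one slow fix, which is why the per-issue list the script prints is more interesting than the single number.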


Some quotes of mine have been picked up by various news sources today, talking about how critical vulnerabilities matter more than meaningless issue counts. Anyway, as they say, 95% of statistics are meaningless, so I wanted to actually explain where the numbers in my quote came from. The quote is about calendar year 2005 and looks just at Red Hat Enterprise Linux 3 (since 4 wasn't out until part way into 2005). In total we fixed 10 critical vulnerabilities (critical by the Microsoft definition, as in the flaw could possibly be exploited remotely by some worm). Our average "days of risk" (the time between an issue being known to the public and us having an update available on Red Hat Network for customers) is under a day, and in fact 90% of them were fixed the same day.

But don't take my word for it: download the raw data files and the perl script from people.redhat.com/mjc and run it yourself, in this case

perl daysofrisk.pl --datestart 20050101 --dateend 20051231 --severity C --distrib rhel3

Different distributions, dates, and so on will give you different results, so you might like to customize it to see how well we did fixing the vulnerabilities that you cared about. (Zero days of risk doesn't always mean we knew about issues in advance either; the reported= date in the cve_dates.txt file can help you see when we got advance notice of an issue.)




Earlier this year I finally got around to buying a replacement TV, and settled on a Hitachi PD7200 plasma. The first thing I noticed was a port on the back labelled 'service use only', which sounded like a fun challenge.

service use only

Turns out that on the Hitachi PD7200 plasma this is a serial port that is enabled by default, so you just need to know the right protocol and you can talk to the plasma. I don't have a PC close to the plasma, but last year I did buy some Lantronix MSS100 devices, which have a 10/100 ethernet connection at one end and a serial RS232 port at the other. I wasn't quite sure what I'd use them for, but at under 50 pounds each they seemed like a bargain at the time.

serial to ethernet converter

Hitachi technical support replied to my query within hours and sent me a couple of PDF documents outlining the protocol, so this was going to be much easier than guesswork.

Knowing the filename, Google found this online version: Hitachi control protocol (PDF).

A small amount of perl later, and I had a script that could query the TV to find out various things (settings, channel, and, interestingly, the number of hours the TV has been on since it started life). The script can also get the TV to do various things, like turn itself on and off and select channels.

Download Hitachi Plasma control (perl, 2k)
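The core of the script is small: open a TCP connection to the MSS100 (which bridges to the plasma's serial port) and exchange protocol strings. Here's the idea as a Python sketch, where the hostname and the command framing are assumptions for illustration; the real command set comes from the Hitachi PDFs:

```python
import socket

MSS100_HOST = "plasma-bridge"  # hypothetical hostname for the Lantronix unit
MSS100_PORT = 3001             # TCP port the MSS100 maps to its serial line

def frame_command(cmd):
    """Wrap a command string for the serial link; the carriage-return
    terminator here is an assumption, check the protocol PDFs for the
    real framing."""
    return cmd.encode("ascii") + b"\r"

def send_command(cmd, host=MSS100_HOST, port=MSS100_PORT, timeout=5.0):
    """Send one command over the ethernet-to-serial bridge and return
    the plasma's reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(frame_command(cmd))
        return s.recv(256).decode("ascii", errors="replace").strip()
```

The nice part of this arrangement is that anything on the home network can drive the TV; the home automation server doesn't need to be anywhere near it.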

So now I have to think of a use for this. I guess I could get the TV to change channels when some event occurs (like one of the motion sensors triggering), or perhaps grab the TV panel lifetime daily to see how many hours of TV we watch a day (and perhaps which channels). Perhaps I could automatically dim the lights and select cinema mode if the channel is changed to DVD (although I could do that just using a programmable remote). I'm sure I'll think of a use for this eventually, but it was a fun diversion for a cold and wet Scottish weekend.


Hi! I'm Mark Cox. This blog gives my thoughts and opinions on my security work, open source, fedora, home automation, and other topics.