Matt Asay wrote an interesting piece last week that took a rough stab at the "worth" of the Open Source code under the care of the Linux Foundation. All the right caveats are there, of course: this isn't really the "worth" of the code, but an approximate cost, in developer-years, to produce that many lines of code. Fair enough, but when the number that pops out is $5 billion, that says something awesome. No matter how you may want to fiddle with the methodology, very few companies on the planet could produce, or have produced, that much code.
Then he threw out the question: does the code under the umbrella of the Apache Software Foundation have that beat? It made me curious ...
I went to OpenHub and got its list of 340 Apache projects. For each project, I fetched the "lines of code" dataset used to produce the project's chart of LOC over time. After rejecting some edge cases, I had LOC for 332 of the Apache projects that OpenHub knows about. The result?
The ASF represents 177,229,680 lines of code, compared to Linux Foundation's 115 million.
So yes, by this crude measure, the ASF is "worth" something like $7.5 billion.
Talk amongst yourselves...
(obviously, I didn't use Wheeler's COCOMO model, but how far off could the value be on such a large/varied dataset? I think it's also interesting that the ASF provides a space for all this to happen with a budget of only about $1 million a year)
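For the curious, the basic "organic" COCOMO formula that Wheeler-style tooling applies is simple enough to sketch. The salary and overhead figures below are commonly quoted SLOCCount-style defaults, but treat them as assumptions; the point is that any reasonable numbers land in the same multi-billion ballpark:

```python
# Back-of-envelope COCOMO, organic mode. Salary/overhead are assumptions.

def effort_person_months(sloc):
    """Basic COCOMO, organic mode: effort = 2.4 * KSLOC^1.05."""
    return 2.4 * (sloc / 1000.0) ** 1.05

def estimated_cost(sloc, annual_salary=56286, overhead=2.4):
    """Convert person-months to dollars, given an assumed salary and overhead."""
    person_years = effort_person_months(sloc) / 12.0
    return person_years * annual_salary * overhead

asf_sloc = 177_229_680
print(f"{effort_person_months(asf_sloc) / 12.0:,.0f} person-years")
print(f"${estimated_cost(asf_sloc) / 1e9:.1f} billion")
```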
Wednesday, October 07, 2015
Sunday, September 20, 2015
GPASM object files
As part of the work on my home automation system, I've been doing a lot of assembly programming for the PIC16F688. That is my chosen microcontroller for all the various embedded systems around the house.
One of the particular issues I've run into is that I've divided the code into modules (like a good little boy). The gputils toolchain supports separate compilation, relocatable code, and linking. SWEET! But this is assembly code. I can't instantiate the I2C slave or master code for a particular pair of pins on the '688. There are tight timing loops, so the code must directly reference the correct port and pin (as opposed to variably-defined values).
One of my control boards talks to TWO I2C busses, and can operate as both slave and master on both busses. Since I must directly reference the port/pin, this means that I need separate compilations of the assembly code for each bus. And then I run into the problem: symbol conflict.
My solution is to rewrite symbols within the library modules for each bus instantiation. So the "start" function for the I2C master (I2C_M_start in the library's object file) is rewritten to HOUSE_I2C_M_start and LOCAL_I2C_M_start.
This works out really well, though I just ran into a problem where one library refers to another library. Not only do I need to rewrite the entrypoint symbols, but also the external reference symbols.
All of this rewriting is done with some Python code. The object files are COFF files, so I wrote a minimalist library to work with GPASM's object files (rather than generic COFF files). Using that library, I have a support script to add prefixes like HOUSE_ or LOCAL_.
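Independent of the COFF details, the renaming rule itself is tiny. Here's a toy sketch over a plain name-to-value symbol table (the addresses and the `only` patterns are illustrative; the real scripts walk GPASM's symbol and relocation sections):

```python
# Toy version of the prefixing step: rewrite matching symbol names,
# leave everything else alone. Values here stand in for COFF entries.

def prefix_symbols(symbols, prefix, only=('I2C_M_', 'I2C_S_')):
    """Return a copy of SYMBOLS with matching names rewritten to PREFIX+name."""
    return {
        (prefix + name) if name.startswith(only) else name: value
        for name, value in symbols.items()
    }

lib = {'I2C_M_start': 0x0120, 'I2C_M_stop': 0x0134, 'delay_us': 0x0200}
house = prefix_symbols(lib, 'HOUSE_')
# 'I2C_M_start' becomes 'HOUSE_I2C_M_start'; 'delay_us' is left alone.
```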
Here are my support scripts:
As an aside, I find it rather amusing to go back to assembly programming days, yet find myself still enmeshed within libraries, object files, and linkers.
Saturday, August 22, 2015
My Google Code projects have moved
Back in March, Google announced that the project hosting service on Google Code was shutting down. I wrote a post about why/how we started the service. ... But that closure time has arrived.
There are four projects on Google Code that I work on. Here is the disposition of each one:
- serf
- This has become Apache Serf, under the umbrella of the Apache Software Foundation. Justin and I started serf at Apache back in 2003. Two people are not sufficient for an Apache community, so we moved the project out of the ASF. We had a temporary location, but moved it to Google Code's project hosting at the service's launch, where it has resided for almost 10 years. The project now has a good community and is returning to its original home.
- (link to: old project site)
- pocore
- This is a portability library that I started, as a tighter replacement for APR. Haven't worked on it lately, but will get back to it, as I believe it is an interesting and needed library. I've moved it to GitHub.
- ezt
- This is a very old, very simple yet capable, and mature templating library that I wrote for Python. It is used in many places due to its simplicity and speed. Also moved to GitHub.
- gstein
- This is my personal repository for random non-project work. I open source everything, even if it might not be packaged perfectly for use. Somebody might find utility in a block of code, so I keep it all open. The code in this repository isn't part of a team effort, so I'm not interested in the tooling over at GitHub. I just want an svn repository to keep history, and to keep it offsite. For this repository, I've chosen Wildbit's beanstalk, and the repository has been published/opened.
- (link to: old project site)
I'm sad to see Google Code go away, and I don't consider the above movements ideal. But it's the best I've got right now. Flow with the times...
Saturday, March 14, 2015
Sigh. Google Code project hosting closing down
Google has just let us know that Google Code's project hosting will be shutting down.
On a story over on Ars Technica, there were a lot of misconceptions about why Google chose to provide project hosting. I posted a long comment there, but want to repeat that here for posterity:
As the Engineering Manager behind Google's project hosting's launch, I think some clarifications need to be made here.
In early 2005, SourceForge was not well-maintained, it was hard to use, and it was the only large hosting site available. Chris and I posed the following question: "what would happen if SourceForge went dark tomorrow?" … F/OSS apocalypse. SF would take tens of thousands of projects down with it. This wasn't too far-fetched, given the funding and team assigned to SourceForge.net at the time. Chris and I explored possibilities: provide Google operational support, machines, or just offer to buy it outright. … Our evaluation was: we didn't need to acquire SourceForge. We just needed to provide an alternative. Provide the community with another basket for their eggs.
Three highly talented engineers and I put together the project hosting, from summer 2005 to its launch at OSCON in July 2006. We let SourceForge know in late 2005 what we were doing, and they added staff. We couldn't have been happier! … we never set out to kill them. Just to provide safety against a potential catastrophic situation for the F/OSS community.
Did GitHub provide a better tool? I think so. But recall: that is their business. Google's interest was caretaking for the F/OSS community (much the same as the Google Summer of Code). The project hosting did that for TEN YEARS.
I'm biased, but call that a success.
There are many more hosting options today, compared to what the F/OSS ecosystem was dealing with in 2005 and 2006. I'm very sad to see it close down, but I can understand. Google contributes greatly to F/OSS, but what is the incremental value of their project hosting? Fantastic in 2006, but lower today.
… I hope the above helps to explain where/how Google Code's project hosting came about.
Thursday, January 15, 2015
Disappointing
I've been reading Ars Technica for years. The bulk of what they do: I find awesome.
A recent article used the phrase "Climate Denial" in its title. To me, in terms of the scientific method, there is no such thing as "denial", but simply "critical" or "questioning" or "not convinced". "Skeptical", if you will. All of these labels are fine, as they acknowledge that the hypothesis in question (AGW) is being tested. But "denial" has been used to shut down conversation, as if critical examination is no longer allowed.
So I posted my thoughts, in the forum attached to that article, basically repeating the above.
Ars Technica appears to have disliked my points about questioning, and about falsifiability no longer being applied to AGW. So they closed my forum post, marking it as "trolling".
The ridiculous thing is that somebody even replied to my post, pointing out "scientific consensus" on Wikipedia, yet that article specifically discusses that certain theories can never be proven. Only disproven (ref: falsifiability, above). So when you find a hypothesis in this pattern... the approach is to disprove.
But nope. Ars Technica shut me down.
I will still read you, Ars. I like your content. But when you shut down discussion? And call it trolling, despite some kind of rational basis, and an attempt at civil discussion?
No. That is wrong, and I have lost respect for what you do.
Sunday, October 05, 2014
Raspberry Pi and (lack of) I2C Repeated Starts
Just spent several hours digging into a communication bug between my Raspberry Pi and a SparkFun MPR121 breakout board. I found two core problems: the MPR121 requiring Repeated Starts in its I2C communication, and the RPi's BCM2835 not implementing them.
MPR121 Requires Repeated Starts
The MPR121 uses I2C to communicate with its host. There are over a hundred registers in the MPR121 that can be read. From a functional standpoint, it would look something like:
uint8 value = read_register(uint7 addr, uint8 which_reg)
On the I2C bus, the bits/frames look something like:
| Start | addr/W | which | R-Start | addr/R | data | Stop |

The Repeated Start allows the host to hold the bus during this "write which register we want, now read it" transaction. If a normal Stop-then-Start sequence were performed, there would be a window where the bus is released, allowing another I2C master to take control. That race condition could allow the other master to change the register-to-read. The original I2C master would then get bad data.
The requirement for a Repeated Start is eminently sensible in a multi-master environment, although the MPR121 documentation does not call out this requirement; my own experimentation and a bit of Google action confirm it. Sensible, yes, but poorly documented. (Though I have to say: the MPR121 doc and application notes are otherwise outstanding! Usually, I use their I2C timing diagrams rather than the formal I2C specification.)
It is also important to note that most environments are single-master, so a Repeated Start wouldn't be necessary, and the requirement is a potential burden upon the I2C master (compared to a standard write/read pair of operations).
BCM2835 Lack of Repeated Starts
My test code was using the Python "smbus" module to speak I2C to the MPR121 breakout. Everything was quite straightforward, and there are several tutorials and pages on the web that show you how to set up I2C on an RPi, and how to use Python to control it.
But when I tried to read the "touched" results for the 10 pads (two bytes), I kept getting the same byte values. The first byte (in register 0x00) has the first eight pads, and the second byte (at 0x01) has the next four. But that second byte was always the same, and changed right along with the first byte as I touched various pads.
After thorough digging through code, trying alternative approaches, oscilloscope review of the I2C bus, etc, it became apparent that upon receiving a Stop condition, the MPR121 would reset the which-register value to 0x00. Thus, I'd tell the MPR121 "read from 0x01. Stop. give me the byte.", and it would always return the value from register 0x00.
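A toy model makes the failure mode obvious. This fake chip mimics the pointer-reset-on-Stop behavior I inferred from the scope traces (the register values are made up; the datasheet doesn't spell this behavior out):

```python
# Minimal model of an MPR121-like register pointer that resets on Stop.

class FakeMPR121:
    def __init__(self, regs):
        self.regs = regs      # register number -> byte value
        self.pointer = 0x00   # current register pointer

    def write_pointer(self, which):
        self.pointer = which

    def stop(self):
        self.pointer = 0x00   # the culprit: a Stop resets the pointer

    def read_byte(self):
        value = self.regs[self.pointer]
        self.pointer += 1
        return value

chip = FakeMPR121({0x00: 0xAA, 0x01: 0xBB})
chip.write_pointer(0x01)
chip.stop()                    # no Repeated Start available, so: Stop...
print(hex(chip.read_byte()))   # reads 0xAA from 0x00, not 0xBB from 0x01
```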
After further investigation and reading: the BCM2835 does not implement Repeated Starts. There is no way for it to read from arbitrary registers of the MPR121. Some people have attempted gimmicks around the BCM2835's 10-bit addressing feature, but I'll avoid that.
However, you can read all the sensor data by reading two bytes, starting at register 0x00. I2C supports a block data transfer, so this is quite straightforward. In fact, the host doesn't ever have to say "start reading from 0x00" since that is the default. It can just issue a read for two bytes.
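A sketch of that block-read workaround, with the bit-decoding split out as a plain function. The address 0x5A (MPR121 default) and bus number 1 are assumptions for the example; the smbus call only works on an actual Pi:

```python
# Decode the two touch-status bytes into touched pad numbers.
# Register 0x00 holds pads 0-7; register 0x01 holds the rest.

def decode_touched(lsb, msb, num_pads=10):
    """Return the list of touched pad numbers from the two status bytes."""
    bits = lsb | (msb << 8)
    return [pad for pad in range(num_pads) if bits & (1 << pad)]

# On an actual Pi (requires python-smbus and I2C enabled):
#   import smbus
#   bus = smbus.SMBus(1)                                # assumed bus number
#   lsb, msb = bus.read_i2c_block_data(0x5A, 0x00, 2)   # block read from 0x00
#   print(decode_touched(lsb, msb))
```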
In order for an RPi to read arbitrary registers, it would be necessary to use "bit-banging" on the GPIO pins to manually run through the I2C protocol. Code out there exists, if this feature is needed.
Summary
In my research, I found there are a number of I2C peripherals that require a Repeated Start. Presumably for transaction purposes (as noted above). Most work just fine with a Stop/Start pair, which will work in the typical single master environment.
In my home automation scenario, the host will (always?) be a PIC16F688 microcontroller running my own I2C master code. Needless to say, it will incorporate the Repeated Start capability.
Hopefully, my research will help your own use of an MPR121, or the I2C bus on a Raspberry Pi.
Capacitive Touch Sensors, revisited

I got my first batch of sensor pad boards a year ago, but never got around to actually writing a post about them. The rather poor picture to the right shows the pads and the ground-hatch on the back. You can even make out an Apache Subversion revision tag on the silk screen :-)
It has a few problems, however: sizing is incorrect for our 1-gang electrical boxes, it is missing drill/mounting holes, and there are no cutouts so the backlighting can shine through. Whoops! When the new board is designed and back from production, then I'll get some good pictures posted.
So why post now? It's been a year!! ... well, I finally got around to pairing this sucker up with SparkFun's MPR121 breakout board. Connected that to a Raspberry Pi, and started Real Testing. I may have to tweak the pads and traces a bit for Rev2, but it is doing very well for the first iteration.
The largest obstacle to getting this functioning with the MPR121 was failure to use its "Auto Config" feature. Once I let the chip figure out what the hell it is connected to... it worked like a dream.
(see next post, re: problems talking to an MPR121 from a Raspberry Pi)
Friday, August 15, 2014
API Endpoints for Arduinos
As part of my home automation system, I need to connect "high-level" systems (such as the primary Linux server) down into the underlying hardware systems around the house. The PIC16F688 microcontrollers that run those systems are seriously low-level. Thus, I've chosen to place all of them onto an I2C bus(*) driven by an Arduino. Why? ... the Arduino has enough capability to mount an Ethernet port on the "high" side, and has built-in I2C support for the "low" side. It is a great mapping device between the complex systems and the hardware systems.
With the hardware selected, and the wiring selected, it came down to protocol. How can that Linux server talk to the Arduino, as a proxy to the individual microcontroller critters splattered across the household? ... Naturally, HTTP was my first choice, as it allows all kinds of languages, libraries, and scripts to perform the work, and even some basic interaction via a full-on browser.
Digging into HTTP servers for the Arduino... Ugh. The landscape is very disappointing. Much of the code is poorly engineered, or the code is designed for serving web pages. During my search, I ran into a colleague's TinyWebServer. I'd call it the best out there for web serving (well-designed and well-coded), but is still overpowered for my purpose: mapping a protocol down to simple hardware interactions.
As a result, I designed a small library to construct a simple API endpoint on the Arduino. The application can register dozens of endpoints to process different types of client requests (62 endpoints are easy; more, if you want to seek out the critical edges of allowed characters in URIs). Each endpoint can accept a number of bytes as parameters, and can return anywhere from zero to an unlimited number of bytes to the client.
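The registration/dispatch idea looks roughly like this toy Python model (the real library is Arduino C++; every name here is illustrative, not the library's actual API). The 62 "easy" endpoints are just the URL-safe single characters [0-9A-Za-z]:

```python
# Toy model: one URL-safe character names an endpoint; a handler takes
# parameter bytes and returns zero or more bytes for the client.
import string

SAFE_NAMES = string.ascii_letters + string.digits  # the "easy" 62 names

class Endpoints:
    def __init__(self):
        self._handlers = {}

    def register(self, name, handler):
        if name not in SAFE_NAMES:
            raise ValueError('endpoint name should be one URL-safe character')
        self._handlers[name] = handler

    def dispatch(self, name, params=b''):
        return self._handlers[name](params)

api = Endpoints()
api.register('t', lambda params: b'\x17')   # e.g. report a temperature
api.register('L', lambda params: b'')       # e.g. set LED level, no reply
print(api.dispatch('t'))
```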
I have yet to "package" the Endpoint system, so any requests and changes are most welcome! I've got some documentation to write, along with incorporating feedback from others' and my own usage.
Would love to get some feedback! ==> http://goo.gl/O5GJ3g
(*) and yes, I know an I2C bus is designed for 0.5 meter runs, rather than a whole house; bus accelerators, speed compensation, and other approaches "should" manage it. I'll report on my success/failure in a future post.
Friday, August 01, 2014
CanaKit PIC Programmer Reset
Earlier tonight, I scorched one of my 16F688 PICs in a stupid move. To see if it was recoverable/usable, I dropped the sucker into my CanaKit programmer and ran 'pk2cmd -P' on my Mac to check whether it could see/detect the PIC.
Bad move.
"No PICkit 2 found." ... and the red BUSY light just started flashing.
It took some research, because CanaKit has almost zero documentation. The short answer is that the programmer is a clone of Microchip's PICkit 2 Development Programmer/Debugger. The flashing red light means the board couldn't load its firmware.
Solution:
Download the V2.32 firmware from the above page. Unzip the file. Then load it onto the board:
$ pk2cmd -D/path/to/firmware/PK2V023200.hex
It'll load the firmware, verify it, then reset the device. No more angry LED!
(I hope those who run into a similar problem will find this blog post, to more quickly reach a solution)
Friday, June 06, 2014
LED lighting control boards
Woot! I just ordered a set of PCBs for lighting control in the new house. Wifey would kill me if we had no lights, so the priority kinda jumped on this one ... (*Wifey comments: I wouldn't kill you. However, I consider it prudent to go to Home Depot and buy some cheapo mechanical switches just in case)
One of the more unusual things about my (upcoming) house is that the bulk of our lighting is 24 VDC LED cans/pots/recessed lamps(*). Ceiling fans, chandeliers, and watercloset light/fans are standard high-voltage AC, but the bulk of our electrical fixtures use these nifty low-voltage lamps. One neat thing is that the lamps don't require AC/DC stepdown transformers, so their cost drops, their reliability increases, and their heat output is lowered.
The wiring becomes simpler in some ways, more complex in others, and possibly cheaper depending on your choices. Simpler because you can run standard 14 gauge stranded speaker wire (cheap; flexible around those corners) instead of the heavier Romex stuff. The lighter wire can save you quite a bit on copper costs, and you can lose the heavy insulation and the concerns of high-voltage wires in your walls. But it is more complicated because you need special hardware to run the lamps ... a repair is no longer just a run to your local Home Depot.
In my case, much of the wiring in the house has been "home-run" back to my server room, so I ended up doubling my copper/install costs. All the control is localized to that room, which also means I don't have actual "switches" in my house, but just sensors (see Capacitive Touch Wall Switches). This leads to "how do I control these lighting circuits?"
Thus, my custom PWM Control Boards. They have an embedded microcontroller which can "do anything" with the nine (9) output channels. A higher-level "upstream" controller will tell it what actions to perform, when, and how, communicating via TTL Serial or I2C or what-have-you. The boards sink current, at whatever voltage (so I will also use these for my 12VDC LED light strips).
I've sent the PCBs off to ITEAD for production. Between now and when they arrive, I'll finish the microcontroller work that I've been working on. Assemble some parts from Mouser, plug it all together, and LIGHTS!
Theoretically.
Will report back, as I make progress...
(*) I decided not to name/link the manufacturer until I get experience and form an opinion. Just email me to ask, if you're interested.
Monday, September 09, 2013
Capacitive Sensing Wall Switches
For my new house, I wanted to go with an unobtrusive, "no controls" style. There are no thermostats, no pool controller, no security panels, and no standard toggle/paddle switches for the lights. Instead, I'm going for a designer style piece of colored glass with capacitive touch sensors behind it. Think of your smart phone's touch capability, and that will be my light switches.
The "switches" are really just sensors, as the signals are delivered to the main house controller where the actual lighting management occurs. The sensor pad can detect touches, gestures, and multi-taps to provide different lighting requests, based on time of day and ambient light. The switches also include optional hookups to an IR/motion sensor, temperature measurement, and all switches have RGB backlighting.
All my work will be Open Source, so I've started the documentation for the wall switches. I have a couple custom PCBs that I'm working on, and (eventually) a whole mess of microcontroller code for the PIC16F688.
I'll keep updating the wiki pages, and committing changes to my repository. I don't have commit emails going anywhere, but Google Code supports feeds for wiki and source changes if you'd like to track the changes. Of course, I'll keep posting here when interesting milestones occur!
Monday, August 12, 2013
Bluetooth Household
For my home automation project, I chose to go with a fully-wired approach since I'm building from scratch. I get to run wires wherever needed, providing (hopefully) more reliability and an ability to troubleshoot problems. Certainly, the parts cost will be much lower than an RF solution.
But with that said, I met Eric Migicovsky a couple weeks ago (Founder/CEO of Pebble). He came up with a great idea: use a Pebble watch as a control mechanism. Sure, I'll have phones, tablets, infrared remotes, and various sensors... but something on my wrist? Always handy? Very cool idea! With multiple Bluetooth base stations, I can even detect signal strength and triangulate a user's position in the house, in order to provide context-sensitive menus and button controls. If you're in the home theater, then "Pause" is going to be a handy watch button, when that drink needs a refill! Given that I'm writing the app, I can even provide Wifey with her own customized watch experience.
To that end, I started doing some research on Bluetooth, and on the Pebble SDK. The first thing to pop up was the need to use Bluetooth Low Energy (aka Bluetooth 4.0, BLE, or Bluetooth Smart [Ready]) rather than the older Bluetooth 2.x or 3.x protocols. BLE allows for interactions without pairing, which is important for roaming about the house, with multiple base stations. The Pebble hardware supports BLE, but it seems that the SDK doesn't (yet) allow for applications to deliver messages to one/more/available base stations. My hope is to work with the Pebble guys to see where/how to make that available to (my) home automation application.
The second part of the problem is the development of the base stations for my house. There are inexpensive Bluetooth/USB dongles (about US$11) that can speak BLE. I've got a few Raspberry Pi boards around the house, with previously-unused USB ports. A little searching seems to indicate the dongles are supported under Linux.
These dongles seem to present themselves as an HID device (eg. keyboard, mouse, etc), and can be switched to a [Bluetooth] Host Controller Interface (HCI). I haven't dug in deeply on this stuff yet, but I do have a Fitbit dongle on my Mac OS. The Fitbit (Flex) speaks BLE, so it seemed appropriate to experiment with.
Working with HID seemed harsh, until I found hidapi. The API is very clean and simple. As a Python programmer, bindings were the next step. Ran across Cython-HIDAPI, which sucks: forked copy of HIDAPI and heavyweight Cython-based bindings (given the ugly, I'm not gonna provide link-love).
Answer: I wrote a ctypes-based binding for hidapi. My first, undocumented draft landed at just 143 lines of Python. Of course, I've checked it in, along with a sample script.
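For flavor, here is roughly what the core of such a ctypes binding looks like. This is an illustrative sketch, not the checked-in code: the library-name probing and the open_device() helper are my assumptions, though hid_open() itself is the real hidapi entry point.

```python
import ctypes
import ctypes.util

# The shared-library name varies by platform and packaging.
_name = (ctypes.util.find_library("hidapi")
         or ctypes.util.find_library("hidapi-hidraw")
         or ctypes.util.find_library("hidapi-libusb"))
_lib = ctypes.CDLL(_name) if _name else None

if _lib is not None:
    # Declare the C signatures so ctypes marshals arguments correctly:
    #   hid_device *hid_open(unsigned short vid, unsigned short pid,
    #                        const wchar_t *serial_number);
    _lib.hid_open.argtypes = [ctypes.c_ushort, ctypes.c_ushort,
                              ctypes.c_wchar_p]
    _lib.hid_open.restype = ctypes.c_void_p
    _lib.hid_close.argtypes = [ctypes.c_void_p]
    _lib.hid_close.restype = None

def open_device(vendor_id, product_id):
    """Open a HID device by USB vendor/product id; returns an opaque
    handle, or None if the device (or hidapi itself) is not present."""
    if _lib is None:
        return None
    return _lib.hid_open(vendor_id, product_id, None) or None
```

The nice part of ctypes over Cython here: no compile step, no fork of the C library, just a description of the existing ABI.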
And after all that, my Fitbit dongle is purely a USB device (calling hid_open() fails). Sigh.
I've got more research to do, and maybe ordering a dongle for experimentation (see Adafruit, or various on Amazon). Maybe I can interact with the Fitbit dongle through USB rather than HID. Who knows. But once I figure the base station thing out, I can track Pebble watches, Fitbits, and other Bluetooth devices throughout my house.
Tuesday, April 16, 2013
Building omxplayer on a Raspberry Pi
The past couple days, I set aside my PIC work and concentrated on setting up a Raspberry Pi ("RPi"). I've got a couple of these, and will use them as video streamers for televisions in my house.
There is quite a bit of documentation for getting an RPi set up, so I won't repeat that here. My current focus is on getting video streaming working. An obvious candidate is RaspBMC, but I was looking for something very bare-bones to simply put a video onto the HDMI output. I ran across PyPlex which seemed right up my alley: Python and effectively an interface-less control of the video.
Yah. Well. Then I look at the setup/build requirements. twisted-web? pexpect? Seriously? Somebody has made this much more complicated than it should be. Whatever happened to just using BaseHTTPServer and the subprocess module?
Digging in, I find it is using omxplayer underneath. No wonder they're using pexpect -- there is a tty/keyboard interface to omxplayer. (of course, pty might be simpler than pexpect, but whatever) So this PyPlex thing starts up a web service and then controls omxplayer via a tty connection. I'm not seeing reliability and responsiveness here. And a lot of code, to boot.
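The simpler design I had in mind would look something like this. It's a hypothetical sketch, not PyPlex's code, and it assumes the player accepts single-key commands on stdin; real omxplayer reads its controlling tty, so in practice you'd hand it a pty created with pty.openpty() rather than a plain pipe. (I'm using the modern http.server names for BaseHTTPServer.)

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class Player:
    """Drive a keyboard-controlled video player through a pipe."""

    def __init__(self, argv):
        self.proc = subprocess.Popen(argv, stdin=subprocess.PIPE)

    def send_key(self, key):
        """Forward one keystroke (e.g. 'p' for pause) to the player."""
        self.proc.stdin.write(key.encode())
        self.proc.stdin.flush()

    def stop(self):
        """Close the pipe and reap the player; returns its exit code."""
        self.proc.stdin.close()
        return self.proc.wait()

class ControlHandler(BaseHTTPRequestHandler):
    """Map trivial URLs onto player keystrokes: GET /pause, GET /quit."""
    keys = {"/pause": "p", "/quit": "q"}
    player = None  # assigned before the server starts

    def do_GET(self):
        key = self.keys.get(self.path)
        if key is None:
            self.send_error(404)
            return
        self.player.send_key(key)
        self.send_response(200)
        self.end_headers()

# Wiring it up (hypothetical paths/ports):
#   ControlHandler.player = Player(["omxplayer", "movie.mp4"])
#   HTTPServer(("", 8000), ControlHandler).serve_forever()
```

Two small classes and the standard library; no twisted-web, no pexpect.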
Tearing off another layer of the onion, I start looking at omxplayer. Sigh. Requirements hell yet again. GCC 4.7. Boost. ffmpeg. Oh, and it is generally set up for cross-compilation rather than building on the RPi. This isn't a bad concept in general, as the RPi is no speed demon. But the build only takes a long time because they chose ffmpeg, whereas the Raspbian distribution uses libav. (these two libraries are reasonably similar, as libav forked from ffmpeg rather nastily a couple years ago)
So I'm looking at this giant pile of C++ code with a bunch of crazy requirements, which would take hours to build on my RPi. This is the wonderful state of video on the RPi. Sigh.
Well... I found a post by Keith Wright where he talks about building (a tweaked fork) of omxplayer on Raspbian. Much better, but the instructions still have crazy oddities about reconfiguring RAM, sudo to build in strange filesystem locations, and hey! fun! building ffmpeg from scratch again. Sigh. A guy just can't get any love around here.
Being the good geek that I am... this just wasn't something I wanted to put up with. I want to build this sucker on my RPi, using the standard tooling and libraries that come with Raspbian.
First up, I started from huceke/omxplayer rather than Keith's because it is much newer. But I did grab the Makefile.include from Keith, as it was sane for building on the RPi. Adjusted some of the paths to point to the installed items. Then, I had to install the following packages on the RPi: libpcre3-dev, libavcodec-dev, libavdevice-dev, libavfilter-dev, libavformat-dev, libboost-dev. As I started working through getting omxplayer built, I ran into a bug in a system header.
In /opt/vc/include/interface/vmcs_host/vcgencmd.h, line 33 needs to be changed to:
#include "interface/vmcs_host/linux/vchost_config.h"
I've filed a pull request to github:raspberrypi/firmware to fix this. Not sure if that is the Right place (that code may come from upstream?), but hopefully somebody will see it.
Next up, I had to hack away, tweak, and otherwise put a bit of pain on the omxplayer sources. Some hacks were easy, but others likely broke stuff (I'm not sure if subtitles work any more). Hard to tell. A/V code is not easy, and not something that I'm familiar with.
You can find all of my changes in my omxplayer fork. Clone that to your RPi, install the necessary packages, and hit "make". No system reconfiguration. No sudo. No hours of ffmpeg building. No GCC 4.7 requirement.
Clone. Make.
Go have fun, and watch some movies!
(my next step is to tear off the user interface bits, and shift to a simpler, pure-C library which I can wrap/control from Python)
Sunday, April 14, 2013
PIC Programming on Mac OS X
Lately, I've begun working on home automation to wire up my entire house with all kinds of goodies. Being a hobbyist, and in an attempt to be frugal, I'm skipping off-the-shelf solutions and building my own. A friend of mine calls me crazy, says it will reduce the value of my house, etc. Whatever. This is some fun stuff!
A big part of these systems is wiring together sensors and actuators. You need something to control all of these things. There are a gazillion solutions, with the obvious ones being an Arduino or Raspberry Pi and their GPIO pins (among the many other features they have). I decided on a layered approach with "small brains" connected to the sensors, managing the specifics, then communicating upstream to a larger brain. I'll likely talk about the upstream bits in a later post, but this one is dedicated to the small brain: the PIC 16F688 microcontroller.
I grew up on a 6502 microprocessor. Graduated to a 68000 when it arrived in the Mac 128k. And after that, never really worried about machine/assembly code. As I looked around for microcontrollers, I ran into the 16F688 on SparkFun. This is a crazy chip -- the number of features packed into a tiny 14-pin DIP is simply incredible compared to where I came from. A couple key features that pointed me at this chip: UART on board, and about $2 per part. This allows me to do stuff such as communicate to serial sensors (such as the Zilog ePIR IR sensor), use bit-banging to communicate to I2C sensors (such as the MPR121 capacitive touch sensor), measure voltages for security systems and temperature (TMP36), ... and all in a tiny and cheap package.
Next up is programming the dang thing. I've got a programmer and my Macbook. This post will document the steps needed to get some code running on the PIC. (to help others, and if I have to retrace these steps in the future)
First, you will need the following software packages installed:
- pk2cmd (download from Microchip)
  I used "make mac105", then symlink'd "pk2cmd" and "PK2DeviceFile.dat" into my PATH. Note: I did not setuid-root the binary (as the docs seem to suggest). It seems to operate fine with my login id.
- gputils (see their SourceForge project)
  This uses a standard configure; make; make install.
When I plug my programmer into the USB port and run "pk2cmd -P" it detects my PIC16F688. Woot!
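If you want to script that sanity check, the detection line is easy to parse. A small sketch -- note that the exact output format in the comment is an assumption from memory, so adjust the match to whatever your pk2cmd actually prints:

```python
import subprocess

def parse_detect(output):
    """Extract the part name from `pk2cmd -P` output. Assumes a line
    like "Auto-Detect: Found part PIC16F688." (format not guaranteed)."""
    for line in output.splitlines():
        if "Found part" in line:
            return line.split("Found part", 1)[1].strip(" .")
    return None

def detect_pic(pk2cmd="pk2cmd"):
    """Run the programmer auto-detect; return the part name, or None."""
    result = subprocess.run([pk2cmd, "-P"], capture_output=True, text=True)
    return parse_detect(result.stdout)
```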
And for a basic "Hello World" to test my setup, I wrote a "blink an LED" program. Download that and type "make install" and it should assemble and load the program onto the PIC sitting in your programmer. Yank out the PIC, wire up RA0 to an LED tied to Vdd (I used a 680Ω resistor), and apply power. The LED should blink at you.
Not that hard!
If it doesn't? Well. Not like you can bring up Visual Studio and debug this thing. The program works for me, and the wiring is dead simple, so I wouldn't know where to point you.
Next up: switch my blinky program to use the chip's Sleep Mode and interrupts [rather than a busy loop]. Less power consumption!
Wednesday, March 07, 2012
Oroppas, by St Clement
So it was finally time to open and drink my flight of St Clement's Oroppas wine (info on 2007 bottling). It's been waiting too long, so the wife and I decided to cook a wonderful dinner and start popping open bottles.
My oldest bottle was from 1995. Waiting 17 years is certainly too long, but what's done is done. The 1995 still had a lot of flavor, rich notes, but with a very short finish (as expected).
Sunday night, we opened up the first four bottles (1995, 1996, 1997, and 1998). Since these bottles were old, I figured they wouldn't be strong representatives of Oroppas. So... I also opened up a 2007 Oroppas; the tasting notes said it was just getting ready to drink. The 2007 was our "control" bottle to really show the bold, smooth flavors of St Clement's Oroppas series.
As I mentioned, the 1995 was still good, but with a short finish. The 1996 was tasty, starting to show some of Oroppas' deeper flavor.
Strangely enough, the 1997 was a step backwards. The wine was a bit sharp and acidic, unlike the big bold flavor of the 2007. Even the 1996 demonstrated some of that boldness. Given the usual success of Napa wines from 1997, I was quite surprised. Thankfully, moving onwards to the 1998 put everything back on track. The 1998 was a good representative of that Oroppas flavor and style. Bold and smooth, with lots of rich flavor and a great mouth feel. The 2007 had even more flavor, but still contained some of the rough tannins of a young wine; the 1998 had none of that roughness and much (though not all) of the flavor. I will probably wait at least two or three years before opening more of the 2007, to age away some of the tannins.
By mid-evening on Monday, we had finished off the 1995, 1996, and 1997. We kept around some of the 1998 and the 2007 for comparison, as I popped open the 1999 and 2000. The progression from 1998 was quite obvious with the 1999 wine -- the flavor and finish just got bigger. But the 2000 was missing the strong berry and fruit undertones, leaving just a woody finish. The 1999 was the clear winner in these few years.
Tuesday night, we opened the last two... the 2001 and 2002. This pair was very similar to Monday night's bottles: one smooth and full-bodied, the other a bit less so. The 2001 was excellent. It had all the flavor, smoothness, fruit and berry, and richness expected. The 2002 was a bit weaker on the fruit and the tannins were starting to creep in.
Okay... Wednesday evening, we opened a 2004 Oroppas as the final bottle in this tasting. It was hiding behind the flight, in the picture above (along with the 2007). Based on tasting all of these wines, I have to say: the 2004 is the absolute best of the entire pack. It has all of the fruit, boldness, smooth flavor, and only a little tannin. I would suggest that holding Oroppas for right around eight years (from the vintage date) is its ideal. (well... based on my tastes)
Overall, please remember that I'm talking comparisons here. All of these bottles were very tasty. There is just no way to call any of these bad wines. As expected, the 1995 had weaker flavor yet no tannins. Moving forward in time, the flavor definitely improved, but the true Oroppas boldness did not show up until about 1998 or so. Around 2001 or 2002, a light shade of tannins started to arrive. The 2004 seemed to be the peak in the bottles that I had.
I've been a club member of St Clement for 10 or 15 years. Their whites, reds, and specialties like the Oroppas are all fabulous wines. Their winery is a big Victorian up on a hill; it is beautiful, with a wonderful view over Napa Valley. I'm definitely a fan, and this flight has been an awesome experience. I highly recommend their wines, and a visit to their wonderful property.
Cheers!
(Photo: Flight of Oroppas, 1995 through 2002)
Sunday, March 04, 2012
Lots of Stuff
I spend a good amount of time in front of my television. Watching shows (cable and Netflix streaming) or playing video games. The TV occupies a large portion of my life. Good? Bad? Who knows. But that isn't the topic for today...
Lately, I've been fascinated watching the show Hoarders on A&E. The show is like a train wreck -- you just can't stop watching. It is a bit sad once you truly understand that hoarding is a psychological disorder, but it is so hard to stop watching. The things that people collect, that get hoarded, the condition of the house, etc ... there is always something new on the show.
But here is where the "Stuff" from my post title comes in. There are a number of shows on TV that focus on "stuff". Hoarders is one, but TLC has a similar show named Hoarding: Buried Alive. The title sounds a bit over the top, until you realize it actually happens.
It doesn't stop there. Those two shows are about people collecting. But there are shows on the other side of the equation, too. Hoarding is about acquiring, but the stuff (sometimes) needs to go away, too. Storage Wars is about lockers that get auctioned off when people don't pay their bill. In many cases, the lockers were owned by hoarders, and all kinds of awesome stuff is found in there.
And then you have the show, American Pickers. The show is extremely fascinating. Mike and Frank find lots of stuff in peoples' hoards, but they concentrate on old items, and the history behind them. As a big History Channel fan, American Pickers is an interesting lens into history. Old cars, music legends, bicycles, cars, and other memorabilia.
The stuff that people accumulate is incredibly fascinating, and these shows provide a broad view into history, people, and an endless variety of "stuff".
Monday, October 31, 2011
Installing Zabbix on Mac OS (Leopard)
My friend Sam Ruby dabbles in a lot of technology, and he tends to do writeups on his blog as he experiments with the stuff. I figured to take a page from his book, and share my own issues/troubles getting Zabbix up and running on my MacBook (running Leopard).
I grabbed the 1.8.8 tarball and unpacked it. For my scenario, I needed the server, the agent, and the frontend (but not the proxy). For simplicity in testing, and because I don't need to monitor bunches o' boxes, I decided to go with SQLite for the database. Zabbix uses the standard "configure/make/make install" pattern, so no hassle so far.
Burp. The compilation failed. Investigating, I found that I needed to apply the patch from ZBX-4085. With that applied, the build completed, so I started to look at the frontend.
The Frontend is written in PHP, which is natively available (along with Apache) on my laptop. With some configuration, I got the frontend to load in my browser. There is a click-through GPL license (huh?) and then a really awesome page that checks your setup. I quickly realized that the builtin PHP was not going to work. Sigh.
I've got MacPorts installed on my laptop, so I just continued with that. Homebrew is all the new rage with the kids, but it doesn't have builtin recipes for PHP. There are a few out on the 'net, but I really didn't want to monkey with that stuff.
Lots of packages were needed: php5, php5-gd, php5-mbstring, php5-sockets, php5-sqlite3, sqlite3. A hojillion dependencies were installed, including another copy of Apache (sigh).
Reloading the setup page, it continued to say SQLite wasn't installed. Looking at the frontend source, it was using a function named sqlite3_open(). With some investigation, I found an email describing the SQLite interfaces for PHP. Zabbix was using an unmaintained version. Rather than monkeying with that, I just edited the code to use the preferred PHP SQLite interface, and filed issue ZBX-4289 to push my changes upstream.
Finally, I needed to tweak /opt/local/etc/php5/php.ini for the recommended Zabbix settings (after copying php.ini-development to php.ini). This included some timezone settings, timeouts, upload sizes, etc. The Zabbix setup page is quite good about guiding you here.
So I created my initial SQLite .db file based on the instructions from the manual and pointed the Zabbix configuration page at it (taking a moment to realize it wanted the pathname put into the database field of the form). The test connection worked and then Zabbix saved the configuration file into frontends/php/conf/zabbix.conf.php. It looks like there is a "download" option for that configuration file, which I presume appears when the conf directory is not writeable. The Apache server (running from MacPorts now, using the MacPorts PHP) was running as myself, so it had no problem writing that configuration file.
Next up: wrestling with the zabbix-server. The first annoying problem was that you cannot give it a configuration file in the current directory. It fails trying to lock "." for some dumb reason. Solution: pass an absolute path to the custom configuration file (the default is in /etc or somesuch, which I didn't want to monkey with).
Getting the server running was very frustrating because it spawns multiple processes which communicate using shared memory. It kept failing with errors about not being able to allocate the shared memory segments. After some research, I found that Mac OS defaults to some pretty small limits. Given that I wasn't about to reconfigure my kernel (using sysctl and some recipes I found on the web), I went to rejigger all the various cache sizes in the zabbix_server.conf file.
It ended up that I had to drop all the sizes to their minimum 128k setting: CacheSize, HistoryCacheSize, TrendCacheSize, HistoryTextCacheSize. Each were set to 131072. Finally, the server started. Whew.
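For reference, the relevant chunk of my zabbix_server.conf ended up as:

```
CacheSize=131072
HistoryCacheSize=131072
TrendCacheSize=131072
HistoryTextCacheSize=131072
```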
When I returned to the frontend to "Finish" the installation and bring up the console... it hung. No response from the server. Huge sigh. With a bunch of investigation, I found that something was holding an exclusive lock on the whole damned SQLite file. Nothing else could write to it (and it seems the frontend likes to test its writability by creating/dropping a dummy table).
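That failure mode is easy to reproduce with plain sqlite3 -- a toy demonstration, nothing to do with Zabbix's actual code: one connection holding an EXCLUSIVE transaction blocks every other writer (and reader) on the same file.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Connection "a" plays the zabbix-server; "b" plays the frontend.
# isolation_level=None puts "a" in autocommit so we can BEGIN by hand.
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1)

a.execute("CREATE TABLE t (x)")
a.execute("BEGIN EXCLUSIVE")  # hold an exclusive lock on the whole file

try:
    # The frontend's "can I write?" probe: create a dummy table.
    b.execute("CREATE TABLE probe (x)")
    locked = False
except sqlite3.OperationalError:
    locked = True  # "database is locked"

print(locked)  # True: the lock covers the whole file, not just table t
```

SQLite locks at file granularity, so a long-lived exclusive transaction in the server starves the frontend completely -- which is exactly the hang I was seeing.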
Fuck. Time to scrap the whole damned "simple SQLite" idea. Fine... I've used MySQL before, so I went with that. Back to MacPorts to install MySQL, the server, and the PHP driver for MySQL. Then I fired it up, created a "zabbix" user, loaded in all the tables, and zapped the zabbix.conf.php file to trigger reconfiguration (after noting to restart Apache to pick up the PHP changes).
The frontend looked happy now, so I tweaked the server's configuration file for MySQL and restarted the server. No workee. Damn. Forgot to reconfigure the server using --with-mysql=/opt/local/lib/mysql5/bin/mysql_config. After reconfiguring, the link failed with unsatisfied references to iconv(), iconv_open(), and iconv_close(). The MySQL interface in the server needs these for some UTF-8 conversions. The builtin Mac OS libiconv should work, but my MacPorts copy of libiconv was interfering, and these functions are named libiconv(), libiconv_open(), and libiconv_close(). My patience was ending, so I was not about to delve into autoconf bullshit and conditional compilation and all that. I simply edited src/libs/zbxcommon/str.c to call the libiconv* versions of the functions. The compile and link succeeded, and I re-installed the newly built server.
Yay! The server restarted, and the website loads up with a nifty little default console.
After a day to get this sucker installed, now I gotta start figuring out how to use it. Oh, joy.
I hope this post will help some future person treading these waters. Good luck!
ps. I may have missed some steps or packages to install or whatever. YMMV, but I think that I've got most of it down. Zabbix is supposed to be some hotness, and I do like its custom agent capability. But hoo-wee. Not a simple package to bring up (I hope it will be easier on a recent Ubuntu, than it was on my creaky Leopard install).
I grabbed the 1.8.8 tarball and unpacked it. For my scenario, I needed the server, the agent, and the frontend (but not the proxy). For simplicity in testing, and because I don't need to monitor bunches o' boxes, I decided to go with SQLite for the database. Zabbix uses the standard "configure/make/make install" pattern, so no hassle so far.
Burp. The compilation failed. Investigating, I found that I needed to apply the patch from ZBX-4085. Once the build completed, I started to look at the frontend.
The frontend is written in PHP, which is natively available (along with Apache) on my laptop. With some configuration, I got the frontend to load in my browser. There is a click-through GPL license (huh?) and then a really awesome page that checks your setup. I quickly realized that the builtin PHP was not going to work. Sigh.
I've got MacPorts installed on my laptop, so I just continued with that. Homebrew is all the new rage with the kids, but it doesn't have builtin recipes for PHP. There are a few out on the 'net, but I really didn't want to monkey with that stuff.
Lots of packages were needed: php5, php5-gd, php5-mbstring, php5-sockets, php5-sqlite3, sqlite3. A hojillion dependencies were installed, including another copy of Apache (sigh).
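For reference, the whole pile boils down to one MacPorts invocation. This sketch just prints the command so you can eyeball it first; the package names are from my ports tree of the day, so verify they still exist in yours:

```shell
# Package names as listed above; printed for review rather than executed.
PKGS="php5 php5-gd php5-mbstring php5-sockets php5-sqlite3 sqlite3"
echo "sudo port install $PKGS"
```

Run the printed command (with sudo) once you're happy with the list, and expect it to pull in a ton of dependencies.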
Reloading the setup page, it continued to say SQLite wasn't installed. Looking at the frontend source, it was using a function named sqlite3_open(). With some investigation, I found an email describing the SQLite interfaces for PHP. Zabbix was using an unmaintained version. Rather than monkeying with that, I just edited the code to use the preferred PHP SQLite interface, and filed issue ZBX-4289 to push my changes upstream.
Finally, I needed to tweak /opt/local/etc/php5/php.ini for the recommended Zabbix settings (after copying php.ini-development to php.ini). This included some timezone settings, timeouts, upload sizes, etc. The Zabbix setup page is quite good about guiding you here.
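For the record, the settings it flagged on my box looked roughly like the fragment below. These are the numbers I recall the 1.8 requirements asking for; trust the setup page's own checks over my memory:

```ini
; /opt/local/etc/php5/php.ini -- per the Zabbix setup page's checks
; (values recalled from the 1.8 requirements; the setup page is authoritative)
max_execution_time = 300
max_input_time = 300
memory_limit = 128M
post_max_size = 16M
upload_max_filesize = 2M
date.timezone = America/Los_Angeles   ; set this to your own zone
```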
So I created my initial SQLite .db file based on the instructions from the manual and pointed the Zabbix configuration page at it (taking a moment to realize it wanted the pathname put into the database field of the form). The test connection worked and then Zabbix saved the configuration file into frontends/php/conf/zabbix.conf.php. It looks like there is a "download" option for that configuration file, which I presume appears when the conf directory is not writeable. The Apache server (running from MacPorts now, using the MacPorts PHP) was running as myself, so it had no problem writing that configuration file.
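For the curious, the generated file is tiny. Mine looked something like this (field names are from the 1.8 frontend as I remember them, and the values here are purely illustrative; check the file the wizard actually wrote):

```php
<?php
// frontends/php/conf/zabbix.conf.php -- written by the setup wizard.
// Illustrative values only; note the full pathname goes in the DATABASE field.
global $DB;
$DB['TYPE']     = 'SQLITE3';
$DB['SERVER']   = 'localhost';
$DB['PORT']     = '0';
$DB['DATABASE'] = '/path/to/zabbix.db';
$DB['USER']     = '';
$DB['PASSWORD'] = '';
$ZBX_SERVER      = 'localhost';
$ZBX_SERVER_PORT = '10051';
?>
```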
Next up: wrestling with the zabbix-server. The first annoying problem was that you cannot give it a configuration file in the current directory. It fails trying to lock "." for some dumb reason. Solution: pass an absolute path to the custom configuration file (the default is in /etc or somesuch, which I didn't want to monkey with). Getting the server running was very frustrating because it spawns multiple processes which communicate using shared memory. It kept failing with errors about not being able to allocate the shared memory segments. After some research, I found that Mac OS defaults to some pretty small limits. Given that I wasn't about to reconfigure my kernel (using sysctl and some recipes I found on the web), I went to rejigger all the various cache sizes in the zabbix_server.conf file.
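For anyone braver than me about touching the kernel settings, the recipes I ran across amount to bumping the SysV shared memory limits in /etc/sysctl.conf. The values below are the commonly suggested ones, not something I actually tested:

```ini
# /etc/sysctl.conf -- SysV shared memory limits (reboot to apply).
# Commonly suggested values from the recipes; I did not go this route.
kern.sysv.shmmax=16777216
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=4096
```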
It ended up that I had to drop all the sizes to their minimum 128k setting: CacheSize, HistoryCacheSize, TrendCacheSize, and HistoryTextCacheSize, each set to 131072. Finally, the server started. Whew.
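That corner of my zabbix_server.conf ended up looking like:

```ini
# zabbix_server.conf -- everything dropped to the 128k floor
CacheSize=131072
HistoryCacheSize=131072
TrendCacheSize=131072
HistoryTextCacheSize=131072
```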
When I returned to the frontend to "Finish" the installation and bring up the console... it hung. No response from the server. Huge sigh. With a bunch of investigation, I found that something was holding an exclusive lock on the whole damned SQLite file. Nothing else could write to it (and it seems the frontend likes to test its writability by creating/dropping a dummy table).
Fuck. Time to scrap the whole damned "simple SQLite" idea. Fine... I've used MySQL before, so I went with that. Back to MacPorts to install MySQL, the server, and the PHP driver for MySQL. Then I fired it up, created a "zabbix" user, loaded in all the tables, and zapped the zabbix.conf.php file to trigger reconfiguration (after noting to restart Apache to pick up the PHP changes).
The frontend looked happy now, so I tweaked the server's configuration file for MySQL and restarted the server. No workee. Damn. Forgot to reconfigure the server using --with-mysql=/opt/local/lib/mysql5/bin/mysql_config. After reconfiguring, the link failed with unsatisfied references to iconv(), iconv_open(), and iconv_close(). The MySQL interface in the server needs these for some UTF-8 conversions. The builtin Mac OS libiconv should work, but my MacPorts copy of libiconv was interfering, and these functions are named libiconv(), libiconv_open(), and libiconv_close(). My patience was ending, so I was not about to delve into autoconf bullshit and conditional compilation and all that. I simply edited src/libs/zbxcommon/str.c to call the libiconv* versions of the functions. The compile and link succeeded, and I re-installed the newly built server.
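Condensed into commands, the MySQL detour looked roughly like this. The mysql_config path is from my MacPorts install, the mysql5 binary name is MacPorts' convention, and the schema file location is what I recall from the 1.8 tarball; the sketch prints the commands for review rather than executing them:

```shell
# Reconfigure and rebuild the server against MacPorts' MySQL.
CONFIGURE="./configure --with-mysql=/opt/local/lib/mysql5/bin/mysql_config"

# Create the database and load the shipped tables
# (schema path assumed from the 1.8 tarball -- verify against yours).
CREATE_DB="mysql5 -u root -e \"CREATE DATABASE zabbix CHARACTER SET utf8\""
LOAD_SCHEMA="mysql5 -u root zabbix < create/schema/mysql.sql"

# Print for review; run by hand once the paths check out.
echo "$CONFIGURE"
echo "$CREATE_DB"
echo "$LOAD_SCHEMA"
```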
Yay! The server restarted, and the website loads up with a nifty little default console.
After a day to get this sucker installed, now I gotta start figuring out how to use it. Oh, joy.
I hope this post will help some future person treading these waters. Good luck!
ps. I may have missed some steps or packages to install or whatever. YMMV, but I think that I've got most of it down. Zabbix is supposed to be some hotness, and I do like its custom agent capability. But hoo-wee. Not a simple package to bring up (I hope it will be easier on a recent Ubuntu than it was on my creaky Leopard install).
Monday, August 15, 2011
Blast from the past: removing the GIL
Way back in 1996, I created a patch to remove the GIL from Python's interpreter (version 1.4!). Dave Beazley just picked up the patch, tore it apart, and wrote a fantastic blog post. It is quite nostalgic for me, from back in the day when I was working at Microsoft on their electronic commerce efforts.
[ I commented on Dave's post; it provides some context that you may also be interested in reading ]
Monday, November 29, 2010
Open Languages are Not Required
I just posted again to Apache Asserts on Computerworld UK: Open Languages are Not Required.
And please note that I'm speaking primarily to enterprise (internal) software developers, who are the vast majority of developers on the planet. They shouldn't really have to worry about the language that they use for their development. Having an open language is critical for us FLOSS developers, but that is an entirely separate discussion. (hat tip to webmink, to clarify my point here)
Note: the publish date is wrong (says last month); dunno what's up with that.
Update: corrected link after the publish date was fixed.
Friday, October 29, 2010
Are You An Open Source Friend?
Computerworld UK invited the Apache Software Foundation to find some people to write for a new blog named "Apache Asserts". A few others and I were selected to post our thoughts on open source, the enterprise, and whatever else we may find interesting.
My first post has been published... check it out!