Wednesday, October 07, 2015

Value of ASF Projects

Matt Asay wrote an interesting piece last week that took a rough stab at the "worth" of the Open Source code under the care of the Linux Foundation. All the right caveats are there, of course: this isn't really the "worth" of the code, but an approximate cost in developer-years to produce that many lines of code. Fair enough, but when the number that pops out is $5 billion, that says something awesome. No matter how you may want to fiddle with the methodology, there are very few companies on the planet that have produced, or could produce, that much code.

Then he threw out the question: does the code under the umbrella of the Apache Software Foundation have that beat? It made me curious ...

I went to OpenHub and got its list of 340 Apache projects. For each project, I fetched the "lines of code" dataset used to produce its chart of LOC over time. After rejecting some edge cases, I had LOC for 332 of the Apache projects that OpenHub knows about. The result?

The ASF represents 177,229,680 lines of code, compared to the Linux Foundation's 115 million.

So yes, by this crude measure, the ASF is "worth" something like $7.5 billion.

Talk amongst yourselves...

(Obviously, I didn't use Wheeler's COCOMO model, but how far off could the value be on such a large and varied dataset? I also think it's interesting that the ASF provides a space for all of this to happen on a budget of only about $1 million a year.)
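
For the curious, here is a rough sketch of the tally and the crude per-line scaling, in Python. The data layout it assumes (one JSON file per project, newest LOC value last) is just an illustration, not OpenHub's actual export format, and the dollar figure comes from simple scaling against Asay's numbers rather than from COCOMO.

    import glob
    import json

    # Illustrative only: assumes the per-project LOC series fetched from
    # OpenHub has been saved locally as one JSON file per project, each
    # holding a list of [date, lines_of_code] pairs ordered oldest to
    # newest. OpenHub's real format differs; adjust the parsing to match
    # whatever you actually fetched.

    LINUX_FOUNDATION_LOC = 115e6    # Asay's figure for Linux Foundation code
    LINUX_FOUNDATION_VALUE = 5e9    # the ~$5 billion estimate from his piece

    total_loc = 0
    for path in glob.glob('openhub-data/*.json'):
        with open(path) as fp:
            series = json.load(fp)
        if series:
            total_loc += series[-1][1]    # newest data point = current LOC

    # Crude scaling: assume the same dollars-per-line as the Linux Foundation
    # estimate, rather than re-running a COCOMO-based tool like sloccount.
    value = total_loc * (LINUX_FOUNDATION_VALUE / LINUX_FOUNDATION_LOC)

    print('{:,} lines of code'.format(total_loc))
    print('roughly ${:.1f} billion by per-line scaling'.format(value / 1e9))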

Sunday, September 20, 2015

GPASM object files

As part of the work on my home automation system, I've been doing a lot of assembly programming for the PIC16F688. That is my chosen microcontroller for all the various embedded systems around the house.

One of the particular issues I've run into is that I've divided the code into modules (like a good little boy). The gputils toolchain supports separate compilation, relocatable code, and linking. SWEET! But this is assembly code: I can't parameterize the I2C slave or master code over an arbitrary pair of pins on the '688. There are tight timing loops, so the code must reference the correct port and pin directly (as opposed to variably-defined values).

One of my control boards talks to TWO I2C buses, and can operate as both slave and master on each. Since I must directly reference the port/pin, I need a separate compilation of the assembly code for each bus. And then I run into a problem: symbol conflicts.

My solution is to rewrite symbols within the library modules for each bus instantiation. So the "start" function for the I2C master (I2C_M_start in the library's object file) is rewritten to HOUSE_I2C_M_start and LOCAL_I2C_M_start.

This works out really well, though I just ran into a problem where one library refers to another library. Not only do I need to rewrite the entrypoint symbols, but also the external reference symbols.

All of this rewriting is done with some Python code. The object files are COFF files, so I wrote a minimalist library to work with GPASM's object files (rather than generic COFF files). Using that library, I have a support script to add prefixes like HOUSE_ or LOCAL_.
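
To give a feel for what that support script does, here is a stripped-down sketch of the renaming pass. It operates on an already-parsed, in-memory symbol table rather than on raw COFF bytes, and the Symbol class and the I2C_S_recv_byte name are purely illustrative; only I2C_M_start comes from the example above.

    # Stripped-down illustration of the renaming pass. The real script works
    # on the COFF symbol table inside a gpasm object file; here the "symbol
    # table" is just a list of records, so the logic can stand on its own.

    LIBRARY_PREFIXES = ('I2C_M_', 'I2C_S_')   # library symbols to rewrite

    class Symbol:
        def __init__(self, name, defined):
            self.name = name
            self.defined = defined   # True: defined here. False: external ref.

    def rewrite(symbols, instance_prefix):
        # Note: sym.defined is deliberately ignored. Entry points defined in
        # this module AND external references into another library module
        # both need the prefix, or the link step won't resolve consistently.
        for sym in symbols:
            if sym.name.startswith(LIBRARY_PREFIXES):
                sym.name = instance_prefix + sym.name

    # One copy of the library's symbols for each bus instantiation:
    house_bus = [Symbol('I2C_M_start', True), Symbol('I2C_S_recv_byte', False)]
    local_bus = [Symbol('I2C_M_start', True), Symbol('I2C_S_recv_byte', False)]

    rewrite(house_bus, 'HOUSE_')
    rewrite(local_bus, 'LOCAL_')

    print([s.name for s in house_bus])   # ['HOUSE_I2C_M_start', 'HOUSE_I2C_S_recv_byte']
    print([s.name for s in local_bus])   # ['LOCAL_I2C_M_start', 'LOCAL_I2C_S_recv_byte']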

Here are my support scripts:


If you're dealing with PIC object files, then maybe the above scripts will be helpful.

As an aside, I find it rather amusing to go back to assembly programming days, yet find myself still enmeshed within libraries, object files, and linkers.

Saturday, August 22, 2015

My Google Code projects have moved

Back in March, Google announced that the project hosting service on Google Code was shutting down. I wrote a post about why/how we started the service. ... But that closure time has arrived.

There are four projects on Google Code that I work on. Here is the disposition of each one:
serf
This has become Apache Serf, under the umbrella of the Apache Software Foundation. Justin and I started serf at Apache back in 2003. Two people are not sufficient for an Apache community, so we moved the project out of the ASF. We had a temporary location, then moved the project to Google Code's project hosting at the service's launch, where it has resided for almost 10 years. The project now has a good community and is returning to its original home.
(link to: old project site)

pocore
This is a portability library that I started as a tighter replacement for APR. I haven't worked on it lately, but I will get back to it, as I believe it is an interesting and needed library. I've moved it to GitHub.

ezt
This is a very old, very simple yet capable, and mature templating library that I wrote for Python. It is used in many places due to its simplicity and speed. It has also moved to GitHub.

gstein
This is my personal repository for random non-project work. I open source everything, even if it might not be packaged perfectly for use. Somebody might find utility in a block of code, so I keep it all open. The code in this repository isn't part of a team effort, so I'm not interested in the tooling over at GitHub. I just want an svn repository to keep history, and to keep it offsite. For this repository, I've chosen Wildbit's Beanstalk, and the repository has been published/opened.
(link to: old project site)
I'm sad to see Google Code go away, and I don't consider the above moves ideal. But it's the best I've got right now. Flow with the times...

Saturday, March 14, 2015

Sigh. Google Code project hosting closing down

Google has just let us know that Google Code's project hosting will be shutting down.

In a story over on Ars Technica, there were a lot of misconceptions about why Google chose to provide project hosting. I posted a long comment there, but I want to repeat it here for posterity:

As the Engineering Manager behind the launch of Google's project hosting, I think some clarifications need to be made here.
In early 2005, SourceForge was not well-maintained, it was hard to use, and it was the only large hosting site available. Chris and I posed the following question: "what would happen if SourceForge went dark tomorrow?" … F/OSS apocalypse. SF would take tens of thousands of projects down with it. This wasn't too far-fetched, given the funding and team assigned to SourceForge.net at the time. Chris and I explored possibilities: have Google provide operational support or machines, or just offer to buy it outright. … Our evaluation was: we didn't need to acquire SourceForge. We just needed to provide an alternative. Provide the community with another basket for their eggs.
Three highly talented engineers and I put together the project hosting from summer 2005 to its launch at OSCON in July 2006. We let SourceForge know in late 2005 what we were doing, and they added staff. We couldn't have been happier! … We never set out to kill them, just to provide safety against a potentially catastrophic situation for the F/OSS community.
Did GitHub provide a better tool? I think so. But recall: that is their business. Google's interest was caretaking for the F/OSS community (much the same as the Google Summer of Code). The project hosting did that for TEN YEARS.
I'm biased, but call that a success.
There are many more hosting options today, compared to what the F/OSS ecosystem was dealing with in 2005 and 2006. I'm very sad to see it close down, but I can understand. Google contributes greatly to F/OSS, but what is the incremental value of their project hosting? Fantastic in 2006, but lower today.
… I hope the above helps to explain where/how Google Code's project hosting came about.

Thursday, January 15, 2015

Disappointing

I've been reading Ars Technica for years. The bulk of what they do, I find awesome.

A recent article used the phrase "Climate Denial" in its title. To me, in terms of the scientific method, there is no such thing as "denial", but simply "critical" or "questioning" or "not convinced". "Skeptical", if you will. All of these labels are fine, as they acknowledge that the hypothesis in question (AGW) is being tested. But "denial" has been used to shut down conversation, as if critical examination is no longer allowed.

So I posted my thoughts in the forum attached to that article, basically repeating the above.

Ars Technica appears to have disliked my points about questioning, and about falsifiability no longer being applied to AGW. So they closed my forum post, marking it as "trolling".

The ridiculous thing is that somebody even replied to my post, pointing to the "scientific consensus" article on Wikipedia, yet that article specifically discusses how certain theories can never be proven, only disproven (ref: falsifiability, above). So when you find a hypothesis in this pattern... the approach is to try to disprove it.

But nope. Ars Technica shut me down.

I will still read you, Ars. I like your content. But when you shut down discussion? And call it trolling, despite some kind of rational basis and an attempt at civil discussion?

No. That is wrong, and I have lost respect for what you do.