Early impressions of Swift, and some workarounds

I’ve been playing around with Swift, the new programming language from Apple, for a few days and I’ve been quite happy with it. I’ve used many languages and development environments since I started coding when I was 15, so I was anxious to see what Swift offered.

I’ve by no means used all the features yet, though I’ve read about most of them in Apple’s online language guide and reference. I’m using Xcode 6 Beta 6, so I expect that some of the gotchas and incomplete implementations will be addressed in the next beta or the final version. Even after that I would expect the language to evolve further, since most languages do.

Some of the things I like:

  • Clean syntax
  • Fast compilation, when it works (see below)
  • Automatic memory management (ARC)
  • A nice attempt at bringing the power of older languages like C++ into a more modern form that includes some features resembling those in Python
  • A simple way to overload operators like addition, negation, and multiplication (see the sketch after this list).
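
For instance, here is a minimal sketch of my own (purely illustrative, not from my translation project, and the exact syntax may still shift between betas) that defines + and unary - for a made-up Point type:

struct Point {
    var x: Int
    var y: Int
}

// Component-wise addition of two points.
func + (lhs: Point, rhs: Point) -> Point {
    return Point(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
}

// Unary negation of a point.
prefix func - (p: Point) -> Point {
    return Point(x: -p.x, y: -p.y)
}

let a = Point(x: 1, y: 2)
let b = Point(x: 3, y: 4)
let sum = a + b   // Point(x: 4, y: 6)
let neg = -a      // Point(x: -1, y: -2)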

My plan of attack here has been to take some C++ code I wrote 5 years ago and translate a subset of it to Swift. That way I can see how the ideas and structure carry over into the new language, and it has been a good way to learn it.

Some observations:

  • Passing by reference and passing by copy are clearly distinguished, with less syntax than in C++ (see the sketch after this list).
  • Because Swift treats Unicode as a first-class citizen, working with strings is more awkward than in many other languages. The need for Swift to coexist with Objective-C is also part of the reason, I believe.
  • The automatic memory management reduces code size compared with my previous manual methods.
  • I’m being more meticulous about when I can destructively change an object. (Almost never, and only close to where I create the object using specific init functions.)
  • I’m looking forward to a larger collection of standardized collection types. Here I would expect a huge improvement over the Standard C++ Library.
  • Once my core translated subset is complete and working well, I’ll look at using more of the idiomatic features of Swift and optimizing the code.
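
To make the pass-by-copy versus pass-by-reference point concrete, here is a small sketch of my own (not from the translated C++ code, and again the exact syntax may shift between betas) contrasting a value type, a reference type, and an inout parameter:

struct Counter {          // a struct is a value type: assigned and passed by copy
    var count = 0
}

class SharedCounter {     // a class is a reference type: assigned and passed by reference
    var count = 0
}

// To let a function mutate a value type in place, the caller opts in with inout and &.
func bump(_ c: inout Counter) {
    c.count += 1
}

var a = Counter()
var b = a                 // b is an independent copy of a
bump(&a)
print(a.count, b.count)   // 1 0

let s = SharedCounter()
let t = s                 // t refers to the same object as s
t.count += 1
print(s.count)            // 1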

While working in the Xcode editing environment, I hit a point where CPU usage shot up to close to 100% for SourceKit, the underlying service that handles editing support, syntax checking, and issue detection. Editing slowed to a crawl, and sometimes I would get a message that SourceKit had crashed. Compilation took many minutes, but the compiled program ran correctly.

I looked around the web, especially on StackOverflow, and found other mentions of the behavior but no great solutions and no problem situation that matched mine exactly. Eventually I went old school: I commented out most of my code and selectively added it back in until I could isolate the offending lines. Note that this was not a runtime error but an environmental problem while editing. That is, it was not a bug in my own code.

If I had done something syntactically wrong, the editor or the compiler should have told me, not sucked up all the resources on my computer. If I wasn’t doing anything wrong, I should have seen no slowdown.

Eventually I found that the offending code was:

var s : Int = (u.bigits[j] * b + u.bigits[j-1] - q * v.bigits[n-1]) * b + u.bigits[j-2]

There is nothing wrong with the code except perhaps its complexity. I broke the statement into several simpler ones. By the way, I know I could have used “+=” but I wanted to be explicit and mirror the original statement.

var s : Int = -q * v.bigits[n-1]
s = s + u.bigits[j-1]
s = s + u.bigits[j] * b
s = s * b
s = s + u.bigits[j-2]

The problem went away. Editing and compilation speed returned to normal.

So the moral of this is that when working with betas of new languages, expect a few glitches and work around them. In the next release of Xcode I’ll try my original statement and see if it has been fixed.

I’m looking forward to trying the new generics and constraints features. Though they look new to Swift, the ideas go back to at least the early 1990s.

Two other tidbits:

  1. If you know the type of an object, it does not hurt to be explicit in stating it. While the compiler could infer the type, stating it makes the code more self-documenting.
  2. This release does not like spaces between unary prefix operators and their operands. That is, “- x” is flagged while “-x” is not. We’ll eventually see if this is a bug or a feature.

Virtual Life with Linux: Standalone OpenSim on Ubuntu 13.10

This post is a great example of why you should never say that you are starting a new series of blog entries. In February of 2010, I wrote a blog post called Virtual Life with Linux: Standalone OpenSim on Ubuntu 9.10, saying:

As a complement to my Life with Linux blog series, I’m introducing another series which explores what I can do in virtual worlds and immersive Internet environments on Linux.

I wrote two entries, and that was it. Well, here is the third entry, notes from trying to install the latest version of OpenSim on Ubuntu Linux 13.10. I’m not going to go through all the steps involved, but mostly talk about some of the glitches I encountered and how I resolved them.

First, some notes on Ubuntu 13.10. I have a dual-boot PC with Windows 7 and Ubuntu on it. I used to do a lot with Linux because it was my job and also because I loved the experience of trying all the distros, seeing what was new, and playing with the features. Well, I moved on to a job involving mobile and then to running the math department in IBM Research, and I really did not touch Linux for a long time. Long, as in the version of Ubuntu on my machine dated from 2009.

I fired this up several weeks ago and started the upgrade process, which was excruciatingly slow. Somewhere in there I accidentally hit the power button on the computer and that pretty much wiped out the Ubuntu image. Don’t do that. I eventually burned a DVD of Ubuntu 13.10. Once again the updates were really slow.

This weekend I did the clever thing and did a web search for “slow Ubuntu updates.” The main suggestion was that I find a mirror closer to me, and this made a huge difference. I went into the Ubuntu Software Center, picked Edit | Software Sources, went into Download From, picked Other…, and found a mirror 40 miles from my house. Problem solved.

32-bit Libraries

I installed the 64-bit version of Ubuntu, but you are going to need the 32-bit libraries. There’s a lot on the web about how to do this for older versions of Ubuntu, how you should use multiarch libraries, how you don’t need to do anything at all, and so on. Eventually I found a solution that worked, from the forums for the Firestorm virtual world viewer. There are other ways to accomplish the same thing, but this does the job.

sudo apt-get install libgtk2.0-0:i386 libpangox-1.0-0:i386 libpangoxft-1.0-0:i386 libidn11:i386 libglu1-mesa:i386

sudo apt-get install gstreamer0.10-pulseaudio:i386

Mono

You need the complete mono package, not just what you install from the Ubuntu Software Center.

sudo apt-get install mono-complete

See the OpenSim build instructions for other platforms.

MySQL

Install the client and the server from the Ubuntu Software Center. You will be asked for a root password, so write it down somewhere.

Getting OpenSim

There are several ways of getting and installing OpenSim. When I last did this four years ago, I took a “from scratch” approach, but I’m doing it more simply now. I used the popular Diva Distribution of OpenSim, which comes set up for a 2×2 megaregion (that is, 4 regions in a square that behave like one great big region). What you lose in flexibility you gain in ease of installation and update. Once you download and expand the files, start reading the README.txt file and then the INSTALL.txt file. Other files will tell you more about MySQL and Mono, but you did the hard work above.

Since I am not connecting this world to the Internet, I did not bother with a DNS name; I simply used localhost at 127.0.0.1.

Follow the instructions for configuring OpenSim and getting it started. You’ll need to give names for the four regions, which I’ll call R1, R2, R3, and R4. These are laid out in the following tile pattern:

R2 (northwest)    R4 (northeast)
R1 (southwest)    R3 (southeast)

You will need to know this if you decide to change the terrains for your world.

For example, suppose you had four terrain files called nw.raw, ne.raw, sw.raw, and se.raw in the terrains subdirectory of your OpenSim bin directory.

Then you would issue the following from within the OpenSim console to set the terrains for the regions:

change region R1
terrain load terrains/sw.raw
change region R2
terrain load terrains/nw.raw
change region R3
terrain load terrains/se.raw
change region R4
terrain load terrains/ne.raw

A web search will find you many options for terrains. Basically, they are elevation files for your region.

Getting a Browser

I believe that all the popular browsers out there for OpenSim are evolutions of major versions of the Second Life browser after it was open sourced. This OpenSim page has details on your options. If you have a choice, get a 64-bit browser if you are using 64-bit Linux. I’ve had good luck with both Firestorm and Kokua.

Other Approaches

Maria Korolov extensively describes the different ways of getting an OpenSim region up and running in her article OpenSim 102: Running your own sims. In particular, she discusses New World Studio, and I’ll be trying to get that running on my MacBook.

Second Life and OpenSim revisited

Several years ago I spent quite a bit of time in Second Life when it was the hot 3D social world. The promise was that you could build and visit worlds that had been uniquely constructed by the users. As such, it was a dynamic environment that tended to be slow as all the shapes, buildings, and textures were loaded.

Second Life alter ego

People can customize their in-world presences extensively, from body shape to the clothes and decorations worn. Indeed, you don’t even need to look like a person. Note, however, that you probably should not show up to a business meeting in Second Life in the form of a squirrel, as my now-retired IBM colleague Irving Wladawsky-Berger once said.

Over time, Second Life fell out of fashion as a world where businesses could set up sites that clients or other interested people could visit to learn about products or services and talk to real people, albeit in avatar form.

For internal business meetings, the lack of truly secure conversation was a problem. We used teleconferences for the voice, and Second Life for the environment. As meetings went on, participants often went inactive, or fell asleep, in Second Life, and we were back to phone meetings as usual.

Second Life lives on today as a social world. That’s never been much of an interest to me, but to each his or her own. It seems to be quite vibrant across a broad range of what “social” means.

My interest in it was always more in the construction aspects, and I’ve written extensively about the techniques involved. See Building in Second Life, By Example. I still get many links to this site from people looking to build moving doors, for example. I also had a long series of blog entries about how to do things in Second Life called My Second Life. Note that this is from 2006, so it is getting a bit old.

You can see all my writings on Second Life by going to the top of this page and entering “second life” in the search box on the right side.

Here is the net for me with Second Life: it is too expensive to be as slow as it is, especially if I only want to use it as an advanced 3D building environment. While new ways of building objects have been introduced, it’s hard to see a lot of difference from the way it was five years ago. I still visit from time to time, but I own no land and spend no money there.

OpenSimulator, or OpenSim for short, is a reimplementation of the Second Life server in open source. It is written in C#, so it requires Microsoft Windows or the Mono environment on Linux. It does not include a browser, but several are available.

Other than the OpenSim site itself, the best source of information about the technology and the worlds built with it is Maria Korolov’s Hypergrid Business. It is excellent.

Some of the features of OpenSim include:

  • an active development community
  • better in-world programming options
  • the ability to host a world on your own computer, which is completely free
  • many online paid hosting options
  • the ability to connect your world to several choices of “grids,” or collections of worlds
  • teleporting from one world to another across a grid

This means that I could set up a world on my local computer, do all the building I want on it, save an image, and then transfer it to a hosted server. If you can and want to connect your computer to the Internet, you can host your world from there and have others visit it.

To see a modern use of OpenSim, read the article $250,000 project models cities in OpenSim.

Some of the potential downsides are:

  • hosting providers come and go, though some have been around for years
  • it may be more difficult to find assets you need at the quality you want, for example textures, but there are guides for finding free content
  • it is probably best if you have some technical chops or know someone who does

So Second Life costs money to own land and to buy some assets, and is more restrictive. OpenSim and the worlds and grids associated with it provide more freedom, but you are more on your own and there might be some long term risks related to hosting. For me, the freedom is worth the risk.

In 2010 I wrote a blog entry called Virtual Life with Linux: Standalone OpenSim on Ubuntu 9.10. I’ve recently gone through the experience of doing this on Ubuntu 13.10. I’ve published some notes on what I did this time to install on my pc in Virtual Life with Linux: Standalone OpenSim on Ubuntu 13.10.

Map and Elevation Data

For a small personal project I’m starting, I wanted to get elevation data for the area surrounding our property in upstate New York. A quick web search yielded The National Map website, a service of the US Geological Survey.

NY Elevation Map

The information and products on the site are extensive, but for my purposes I followed the link to The National Map Viewer and Download Platform. From there I zoomed down to the area of our house and started looking at what was available. After several experiments, I decided to download a portion of the National Elevation Dataset at 1 arc-second resolution. The 1/3 arc-second version was also available but, as expected, was 9 times bigger.

The readme.pdf file starts with the following:

The U.S. Geological Survey has developed the National Elevation Dataset (NED). The NED is a seamless mosaic of best-available elevation data drawn from a variety of sources. While much of the NED is derived from USGS Digital Elevation Models (DEM’s) in the 7.5-minute series, increasingly large areas are being obtained from active remote sensing technologies, such as LIDAR and IFSAR, and also by digital photogrammetric processes. Efficient processing methods were developed to filter production artifacts in the source data, convert to the NAD83 datum, edge-match, and fill slivers of missing data at quadrangle seams. NED is available in spatial resolutions of 1 arc-second (roughly 30 meters), 1/3 arcsecond (roughly 10 meters), and 1/9 arc-second (roughly 3 meters). The dataset is updated with “best available” elevation data on a two month cycle.

These digital elevation datasets are essential in understanding the Earth’s landscape: elevation, slope, and aspect (direction a slope faces.) NED is critical to identifying and modeling geologic features such as water drainage channels and basins, watersheds, peaks and pits, and movements such as avalanches. NED is used to create relief maps, 3-D visualizations, to classify land cover and to geometrically correct data from satellite or aircraft sensors (orthorectification). The fire community, natural resource managers, urban planners, conservationist, emergency responders, communication companies to name a few all rely on these elevation datasets. This data also supports The National Map.

Now I have to figure out how to process the file, which I’ll do by looking at the data dictionary elsewhere on the site and writing some Python code.

Update: Even though I zoomed down to a rectangular area less than one block on a side, the downloaded data contains a 1 second by 1 second square of elevation data. That’s more data than I was expecting, and I’ll have to pull out a subset.

My annotated programming language history

I’ve been coding, a.k.a. programming, since I was 15 years old. Since then I’ve used many programming languages. Some of them have been for work, some have been for fun. I mean, really, who hasn’t done some programming while on vacation?

Somewhat chronologically, here are many of the languages I’ve used with some comments on my experience with them. In total I’ve written millions of lines of code in the various languages over four decades.

BASIC: This is the first language I used. While it was primitive, I was able to write some long programs in it, such as a Monopoly game. In between coding sessions, I saved my work on yellow paper tape. I fiddled with Visual Basic years later, but I never wrote anything substantive in it.

APL: Now we’re talking a serious language, and it is still in use today, particularly by one statistician in my group at IBM Research. I was editor of the school newspaper when I was a senior in high school, and I wrote a primitive word processor in APL that would justify the text. It sure beat using a typewriter. Some modern programming languages and environments like R and MATLAB owe a lot to APL. They should mention that more.

FORTRAN: My first use of this language was for traffic simulations; I used a DYNAMO implementation in FORTRAN in a course I took one summer at the Polytechnic Institute of New York in Brooklyn. Forget interactive code editing; we used punch cards! FORTRAN was created at IBM Research, by the way.

PDP-11 Assembler: I only took one Computer Science class in college and this was the language used. Evidently the course alternated between Lisp and Assembler as the primary language in which the students wrote. However, our big project was to write a Lisp interpreter in Assembler, which got me hooked on ideas like garbage collection. No, I did not and do not mind the parentheses.

csh, bash, and the like: These are the shell scripting languages for UNIX, Linux, and the Mac. I’ve used them on and off for several decades. They are very powerful, but I can never remember the syntax, which I need to look up every time.

Perl: Extraordinary, powerful, write once and hope you can figure it out later. Just not for me.

PL/I: Classic IBM mainframe language and it saved me from ever learning COBOL. When I was a summer student with IBM during my college years, we used PL/I to write applications for optimizing IBM’s bulk purchases of telecommunications capacity for voice and data. It was basically one big queuing theory problem with huge amounts of data. It was big data, 70s style.

Rexx: This language represented a real change in the way I viewed languages on the mainframe. Rather than being obviously descended from the punch card days, it was a modern language that allowed you to imagine data in more than a line-by-line mode and helped you think of patterns within the data. It was much easier to use than the compiled languages I had used earlier. My primary use for it was in writing macros for the XEDIT editor.

Turbo Pascal: This was my main programming language on my IBM PC in the 1980s. The editor was built in and the compiler was very fast. I used it to write an interactive editor like the mainframe’s XEDIT, as well as a Scheme interpreter.

Scheme: A very nice and elegant descendant of Lisp, it was long considered an important programming language for teaching Computer Science. That role has been largely usurped by Java. I liked writing interpreters for Scheme, but I never did much actual coding in it.

VM Lisp: This was a Lisp dialect developed at IBM Research for mainframes. My group there, led by Dick Jenks, used it as the bottommost implementation language for computer algebra systems like Scratchpad, Scratchpad II, and Axiom. Like other Lisps, it had two very important features: automatic garbage collection and bignums, also known as arbitrarily large integers.

Boot: An internal language at IBM Research built on Lisp that provided features like collections and pattern matching for complex assignments. It had many advantages over Lisp and inherited the garbage collection and bignums. From time to time I and others would rewrite parts of Boot to get more efficient code generation, but the parser was very hard to tinker with.

Axiom compiler and interpreter languages: The IBM Research team developed these to express and compute with very sophisticated type hierarchies and algorithms, typical of how mathematics itself is really done. So the Axiom notion of “category” corresponded to that in mathematics, and one algorithm could be conditionally chosen over another at runtime based on categorical properties of the computational domains. This work preceded some later language features that have shown up in Ruby and Sage. The interpreted language was weakly typed in that it tried to figure out what you meant mathematically. So x + 1/2 would produce an object of type Polynomial RationalNumber. While the type interpretation was pretty impressive, the speed and ease of use never made the system as popular as other math systems like Maple or Mathematica.

awk: Great language for regular expressions and sophisticated text processing. I wrote a lot of awk for pre- and post-processing the Axiom book.

C: Better than assembler, great for really understanding how code translates to execution and how it could get optimized. Happy to move on to C++.

C++: Yay, objects. I started using C++ when I wrote techexplorer for displaying live TeX and LaTeX documents. I used the type system extensively, though I’ve always strongly disliked the use of templates. Several years ago I wrote a small toy computer algebra system in C++ and had to implement bignums. While there are several such libraries available in open source for C and C++, none of them met my tastes or open source license preferences. Coding in C++ was my first experience with Microsoft Visual Studio in the 1990s. The C++ standard library is simply not as easy to use as the built-in collection types in Python; see below.

Smalltalk: Nope, but largely because I disliked the programming environments. The design of the language taught me a lot about object orientation.

Java: This is obviously an important language, but I don’t use it for my personal coding, which is sporadic. If I used it all day long and could keep the syntax and library organization in my head, that would be another story. I would be very hesitant to write the key elements of a server-side networked application in something other than Java due to security concerns (that is, Java is good).

Ruby: Nope. Installed many times, but it just doesn’t make me want to write huge applications in it.

PHP: The implementation language for WordPress and Drupal, in addition to many other web applications. If you want to spit out HTML, this is the way to do it. I’m not in love with its object features, but the other programming elements are more than good enough to munch on a lot of data and make it presentable.

Objective-C: Welcome to the all-Apple world, practically speaking. It hurts my head, but it is really powerful, and Apple has provided a gorgeous library for building Mac and iOS mobile apps. My life improved when I discovered that I could write the algorithmic parts of an app in C++ and then use Objective-C only for the user interface and some library access.

Python: This is my all-time favorite language. It’s got bignums, it’s got garbage collection, it’s got lists and hash tables, and it can be procedural, object-oriented, or functional. I can code and debug faster in it than in any other language I’ve used. Two huge improvements would be 1) making it much easier to create web applications without needing a framework like Django, and 2) having Apple, Google, and Microsoft make it a first-class language for mobile app development.

JavaScript: This has been on my to-do list for years, and I’ve written a few dozen lines here and there for some web pages. To me, the object system is strange, but I need to get over it. Of the languages out there now, this is probably the most important one missing from my coding arsenal, and that represents an intellectual deficiency on my part.

Introducing PureSystems, IBM’s expert integrated systems family

ibm logo

Today IBM introduced the PureSystems family to simultaneously simplify and make more powerful the hardware and software that organizations use to power their datacenters, clouds, and other computing environments.

From the press release:

With the introduction of the new PureSystems family, IBM is unveiling three major advances that point to a new era of computing technology that is designed to allow businesses to slash the high costs and nagging complexity associated with managing information technology.

  • “Scale-In” System Design: With PureSystems, IBM is introducing a new concept in system design that integrates the server, storage, and networking into a highly automated, simple-to-manage machine. Scale-in design provides for increased density – PureSystems can handle twice as many applications compared to some IBM systems, doubling the computing power per square foot of data center space.
  • Patterns of Expertise: For the first time, IBM is embedding technology and industry expertise through first-of-a-kind software that allows the systems to automatically handle basic, time-consuming tasks such as configuration, upgrades, and application requirements.
  • Cloud Ready integration: Out of the box, all PureSystems family members are built for the cloud, enabling corporations to quickly create private, self-service cloud offerings that can scale up and down automatically.

What this means is the hardware is tightly integrated and easier to configure and maintain. The software patterns complement the hardware and accelerate the use of the systems for the types of workloads that customers really deploy. Finally, since the systems are cloud-ready, PureSystem installations can span use cases from traditional datacenters to private clouds.

This represents a US $2 Billion R&D investment by IBM. Personally, it’s been fascinating watching the pieces come together and the different parts of IBM working to create this new family of products. It’s exciting to finally be able to talk about it!

IBM recently celebrated its 100 year anniversary and talked about the many significant computing innovations it introduced during its first century. I suspect I won’t be around to see the corresponding version for the second hundred years, but I’m very confident that today’s PureSystems introduction will be front and center.

The PureSystems team produced this snazzy infographic to sum up why customers need these new systems.

IBM PureSystems Infographic - IT Headaches

Also see: “Daily links for 04/11/2012 – IBM PureSystems Edition”

Something new, something (big) blue: IBM WebSphere Application Server V8.5 Alpha

While this post definitely falls into the category of “a word from my sponsor,” I hope you’ll take a look at the software being discussed if you have at all been involved with Java and web application servers.

wasdev banner

One of the most fun parts of being in the software world is being able to get your code into the hands of developers. While you can have great big product releases with much fanfare, other times there are smaller alpha and beta drops that can surprise you if you take the time to look at them. This is one of those latter instances.

If I’m developing code, I’m not going to get it right the first time. I’ll need to fix bugs but I’ll also need to progressively add features. This means that I’ll be editing, starting up the environment, testing, tweaking, debugging, over and over again. My environment and tools need to make this fast and easy for me. When I’m done coding and testing, I need to know that what I produce will run in a production quality environment with the right security, performance, availability and other qualities of service. I need a web application environment, both runtime and tools, that gives me all this.

IBM has just released the WebSphere Application Server V8.5 Alpha. First of all, this is a shiny new thing that developers, particularly Java developers, should check out. Within it is something new and different that we’re calling the Liberty Profile. The website describes what you get with it:

The WebSphere Application Server V8.5 Alpha delivers a simplified and lightweight runtime for web applications. Incredibly fast restart times coupled with its small size and ease of use make V8.5 a great option for Developers building web applications that don’t require the full JEE environment of traditional enterprise application server profiles. Highlights of the WebSphere Application Server V8.5 Alpha include:

  • Free and frictionless download for development purposes
  • Ultra lightweight modular runtime with an install size of under 50 MB
  • Incredibly fast startup times of under 5 seconds
  • Simplified configuration for quick time to productivity
  • WebSphere Developer Tools available as Eclipse plug-ins

To get started, download the server and/or the tools.

You can learn more via articles, videos, podcasts, and samples.

We have a blog where you can learn what the IBM developers are doing with WebSphere and Eclipse. In particular, check out Ian Robinson’s entry on “Introducing the Liberty Profile.”

Finally, and this one is really important, join the community and participate in the discussions.

Sometimes products are just small evolutionary changes from what was there before. This represents something profoundly different. In my opinion, and I am far from partial, it is worth a look.

IBM WebSphere Developer Technical Journal – June, 2011

The IBM WebSphere Developer Technical Journal is a great resource for the latest technical news, advice, and details about what’s happening within the WebSphere line of products. Yes, this is kind of a message from my sponsor, but there is no buy button. Don’t tell sales.

WebSphere graphic image

One of the things that I’m doing now that I’m back here in IBM WebSphere is looking around at the resources that are available for the products in my portfolio. There’s quite a bit on the product pages, as you would expect, but also on developerWorks. The articles, forums, and blogs on developerWorks provide significant resources for those using all IBM products, not just WebSphere. That said, they do have a large section on WebSphere itself.

From time to time I’ll put up some pointers to WebSphere resources. Today I’ll start with the WebSphere Developer Technical Journal. It’s available to be read online, in PDF form, or on your Kindle.

Here are a few articles in the June edition:

If you wish, you can download this entire issue in PDF format. I download such documents and then use Dropbox to read them on my iPad.

10 things to think about to improve software product descriptions

I’ve been back in a software product area since the beginning of June, and I’ve been spending a lot of time looking at product descriptions and literature. Not just IBM’s, mind you, but those of our competitors as well. This includes traditional, commercial “proprietary” software and commercial open source software.

Some of the descriptions of products in the industry are quite good, but many are pretty bad. They seem to range from “this is so high level that you have no idea what the product does” to “this has a long list of technical details that we hope impresses you even though you might not know how they could possibly help your business.”

I know, I know, different descriptions for different audiences. What you say to someone in development or to the CTO should probably be different from what you say to the CIO, and almost certainly different from what you say to the CFO. However, when there is only one description for everyone, everyone suffers.

You need to know who your audience is (“segmentation”) and then shape what you say. Explicitly address your different audiences. It’s ok to say right at the beginning of each paragraph to whom you are speaking.

Here are a few suggestions, written from the perspective of a customer.

  1. First and foremost, the goal in acquiring software is to accomplish something. Tell me if your product will help me do that. This might be a simple yes or no.
  2. If I am a developer, tell me how easily your product will let me do what I wish and how it will make my life simpler and more productive. This new ease is in comparison to the previous version of your product as well as offerings from your competitors. Don’t overdo it on cute statements like “we make developers happy.”
  3. Match new or improved technical features to business value. “By doubling the amount of memory your application can use, you can now serve 25% more customers in the same amount of time and increase your revenue.”
  4. Regarding business value, stating how your software can help increase revenue (as above), improve security, increase availability, improve customer loyalty, decrease maintenance costs, and simplify integration with other parts of the business are all good things. If your software will help do none of these, why would I possibly install it?
  5. Don’t be overly simplistic about TCO (total cost of ownership) and TCA (total cost of acquisition). I can add up transactional, service, and support costs over 5 years as well as you can.
  6. Do, however, give me a way to compute the real return on investment from your software. Even if your TCA is $0, I may need to pay my people, your people, or a services integrator money to make it work for me. Give me examples based on real customers if possible.
  7. If I read your website and after 5 minutes I still don’t have the vaguest idea what your product does or why I might want to install it, you’ve failed. Start over.
  8. Separate promises of future functionality and value from what you can do right now. I’m interested in your roadmap, but I have problems to solve right now. Do not imply you can do more today than you can.
  9. Use graphics well to convey what your software does and the value it gives me. Don’t think that adding more tiny boxes with tinier print in them improves things. You are educating me about your offerings so I can make an intelligent and well informed decision. You are helping me make the case for acquiring your software within my organization.
  10. For emphasis: tell me how your software will make my organization better, more efficient, and more profitable, and how I can serve my customers better. If it will lead to great personal success for me, so much the better!

 

European WebSphere Technical Conference in Berlin

IBM has announced the European WebSphere Technical Conference for 2011. The conference will be held from October 10th to the 14th in Berlin, Germany. From the website:

The 2011 IBM European WebSphere Technical Conference, which combines the WebSphere and Transaction & Messaging Conferences of the previous years into one seamless agenda, is a 4.5 day event held 10-14 October 2011 in Berlin, Germany.

This conference has earned the reputation for delivering deep technical content targeted at architects, developers, integrators and administrators by offering lectures and hands-on labs that focus on the best practices and practical skills required to run today’s enterprises. This year will be no exception!

Attend the WebSphere Technical Conference and expand your knowledge of SOA, CICS, Messaging, WebSphere Application Servers and Infrastructure, including a focus on BPM and Cloud Computing. You can also expect to gain insight into IBM’s software strategy and learn about the latest development directions for the products in the WebSphere software platform.

New position within IBM

It’s been effective for a week, so I guess I can spill the beans here and say that I’ve shifted to a new executive position within IBM, namely to be the Vice President for WebSphere Foundation Product Management in the Software Group. I’ll have more to say about this over time, but basically it means that my team works with development, sales and marketing to drive the WebSphere Application server line and products like WebSphere eXtreme Scale. These are significant unto themselves but also underlie some of the most important software products that IBM sells. That’s not a totally inclusive list, but you get the idea.

Obviously we’re not just concerned with what we have already; we will also be driving the plans for new products and the next generation of current ones that fit within that “foundation” area of the IT software stack. Stay tuned.

Some of you might ask, “Didn’t you sort of do something similar about 6 or 7 years ago?” Yes and no, sort of.

When I was last here in 2003-4, the world was just figuring out the commercial benefits of applying XML to business problems, and web services were pretty new. There were several open source app servers, and Oracle had not yet bought BEA and Sun. We were about to enter the SOA era that led us to the current cloud era. Also, I had a marketing position, something I had never done before. This role is more of a blend of the business and the technical.

I learned a lot during that time but the IT world has evolved significantly, as have our products. We’re all right on the cusp of doing even more wonderful things with this core technology we as an industry have developed, so it’s a great time to move back and help drive it from the inside.

What does this mean for the blog?

  • I will not use it as a marketing vehicle for products, though I may provide links to things I think are of interest.
  • I’ll still talk about all those extraneous topics like gardening, sailing, cooking, and not playing the guitar well.
  • The discussion of standards will probably increase again.
  • I’ll keep talking about Linux and providing links to interesting articles, but more from a user or enterprise consumer perspective.
  • The amount I’ve said about open source lately has decreased primarily because I’ve largely exhausted many of the discussion areas that interest me, and I don’t like repeating myself. There will still be some content about open source, but it will be at about the same level it’s been for the last six months.
  • I’ll be ramping up the discussion of Java and other languages, programming frameworks, tools, cloud, mobile, runtime considerations, and application integration. Much of this has been present from time to time, but will increase.

Thinking about restaurant software and online services

I predict that in the future more restaurants will be managed via online services that not only help with the accounting of revenue and expenses, but assist in predicting what menu items will do well for a given profit.

Two weeks ago I posted a blog entry about the basic ideas behind predictive analytics. One of the examples I used was a restaurant on Main Street in the college town in which I live. I wondered how data about the weather and the college calendar might be used to predict sales and therefore help in setting menus and ordering supplies.

Every once in a while I look at the hits I get on the site, and I noticed that someone had landed on that blog entry after searching Google for “restaurant predictive analytics.” I just repeated that search, and my entry was #2 in the listing. That’s nice, but I didn’t actually say how to do the predictions based on the data analysis.

That got me wondering about software for restaurants in general. It’s a Sunday afternoon so I poked around the web a bit. A good place to start to see what is out there is RestaurantSoftware.com. Much of what is there has to do with POS (point of sale), staff scheduling, and accounting. It makes a big difference whether you are running a small restaurant or a chain of a few hundred locations in determining what software you need, want, and can afford.

In theory what you want is a great big model of the restaurant taking into account all the expenses and all the money generated. The idea is to generate a profit, and probably the bigger the better. If you own the restaurant, you need to pay yourself as well.

An interesting article to read is “How to Price Your Restaurant Menu” by Lorri Mealey at About.com. This is a good place to start because for a restaurant, food and drinks are your primary revenue makers and you better get that pricing right. Mealey states that in general the cost of the food ingredients should be about 30-35% of the price of the dish on the menu.

Let me assume 33%, and we’ll look at this two ways (a small code sketch follows the list):

  1. If the total food cost is $1 then the menu price should be $1 / .33 = $3.03. You might round this down to $2.99 but you’ll make more money if you round it up to $3.25.
  2. On the other hand, if you know a dish should cost no more than $10 because of local prices (e.g., the restaurant across the street), then your cost of ingredients should be no more than $3.30. If your food cost is more, you lose money; if you can spend a bit less, then you can make money.
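
To spell out the arithmetic, here is a toy sketch (the 33% share and the function names are my own assumptions for illustration, not part of any real restaurant system):

// Menu price implied by a food cost, given the share of the price that food should represent.
func menuPrice(foodCost: Double, foodCostShare: Double = 0.33) -> Double {
    return foodCost / foodCostShare
}

// Maximum food cost you can afford for a menu price set by the local market.
func maxFoodCost(menuPrice: Double, foodCostShare: Double = 0.33) -> Double {
    return menuPrice * foodCostShare
}

let priceForOneDollarDish = menuPrice(foodCost: 1.00)       // about 3.03
let budgetForTenDollarDish = maxFoodCost(menuPrice: 10.00)  // 3.30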

This is Business 101: you make more profit if you charge the most you can and spend the least you can. Note that you don’t have to be 100% capitalist about this as you can donate part of your profits to local charities or you can voluntarily increase your expenses to pay your workers better wages or provide better benefits.

For a restaurant, the expenses are not just for food. You have to factor in what you are paying for your people, the expense associated with customers paying by credit card, the rent or mortgage on the property, taxes, water, heat, gas, electricity, non-food supplies, IT equipment and services, and any payments you make to service other debt you incurred to start the restaurant. You may have done some construction, installed new furniture, upgraded the kitchen, etc.

So when all is said and done, if you charge too little for the food you sell, you will not be able to pay all your expenses, and you may eventually go out of business. If you run the business well, charge the right amount, and drive enough traffic through the restaurant, you should do well. Good luck.

This blog entry is supposed to be about software, so the key question is what software can you run to help you with all the above? Note you can do this by hand or with a spreadsheet, though it might be a lot of work.

For many restaurants the center of the IT infrastructure is the POS, or point of sale, terminal. Basically, this is a fancy electronic cash register that is connected to a back office database. It allows the cashier or wait staff to enter exactly what was ordered and what was paid. Whatever else it does, it lets you know how much food of what type was sold and how much money you made. (It can also help you understand, for example, the relative sales efficiency of different members of your wait staff.)

Given that you know how much money you made on a given day, you need to connect that to how much you spent on food inventory. Did you buy too much of the wrong ingredients because people did not order those menu items? Could you have made a lot more money on a highly profitable special if you hadn’t run out of the ingredients? Should you have known that on days where the temperature is below freezing that customers usually buy more chili?

So looking at the complete picture of the software you might want, you not only need something that looks at all your expenses and revenue and helps you manage those, but you then also want to mine this data to make smart decisions for the future.

The system should tell you which menu items to drop because they aren’t selling or are unprofitable, rather than relying on a hunch by someone on your staff. The system should tell you which menu items are likely to sell more on which days and how many supplies to order to handle the expected demand.

If this software (and the hardware on which to run it) is too expensive, you might be better off with the spreadsheets and the guessing. I think the future here will be similar to what is happening elsewhere: more and more restaurants will be using online services where all transactions are done through a browser. The complete service will do everything I spoke of above, handle payroll and accounting, help in ordering inventory, assist in making menus, and schedule your employees.

This assumes that you have an internet connection, of course, and that the cost of the service matches your budget. Indeed, the service should help you manage all your expenses, including that of the service itself.

P.S. Don’t even think about starting a restaurant until you have read Anthony Bourdain’s Kitchen Confidential Updated Edition: Adventures in the Culinary Underbelly. Liking to cook for friends does not make you a good restaurateur candidate.

P.P.S. When I was a teenager I worked bussing tables and then as a short order cook at Maggie Muffet’s Country Kitchen in Carmel, NY. That restaurant is long gone, which is exactly the way it should be.

Predictive analytics, the basic idea

If you’ve ever bought an insurance policy, you have seen the output of predictive analytics.

If you’ve ever wondered how a restaurant knows how many “specials” to prep for, you’ve thought about predictive analytics, though in practice it was probably informal.

Analytics means munching on a whole lot of data, sorting it out, deciding what is important, and then understanding what it tells you. The result can give you a view of what happened in the past and give you a dashboard of the situation right now. This is usually what is called business analytics. If you are interested in using the information to try to guess what will happen in the future, that is predictive analytics.

Sounds simple, right? Ready to predict the weather or the stock market?

Predictive analytics is not about guaranteeing a certain result, but instead about giving you an idea of what is likely to happen, with a statement of how accurate the prediction will probably be.

What can get in the way of making sound predictions?

  • Bad data
  • Irrelevant data for what you are trying to model
  • More generally, too much or too little information, weighted in the wrong way
  • Random events
  • The wrong model for prediction
  • Misinterpretation of the results

You get the idea.

For the insurance example, information about your family health history, whether you smoke or drink, how much you drive daily and where, and similar factors all go into how much a life insurance policy will cost you. Remember that the insurance company is out to make money, so on average, across all its customers, what people pay for policies needs to exceed what the company pays out on them, plus the usual business overhead. People who do this work of associating risks with their financial consequences are called actuaries.

How about when you go to a big box store and buy some electronic gear or an appliance? In my experience, when I check out I am always offered the chance to buy an extended warranty. Your bet is that if you get it then you will indeed need to take advantage of the warranty for repair or replacement, so it is worth the extra money. If you don’t purchase it, then you think it is unlikely that product will break and you will have to repair or replace it at your expense if it does.

In my experience, AppleCare is worth the investment, but I digress.

The big box store is betting that enough people will buy the warranties to more than cover the cost of any who need them. Some will certainly make claims, but if enough don’t, it will be profitable.

A wrong guess by the store can be very expensive. The people who do the predictive analytics need to look at past warranty claims based on the type of product, manufacturer, geography, and perhaps time of year. There can be other factors as well, and they may be surprising. What those factors are may be highly valued corporate intelligence.

What about that restaurant? What can affect how much food it decides to order or how much staff to have working on a given day?

  • Past success with menu items
  • Day of the week
  • Time of the year
  • Weather
  • Local seasonal population variations
  • Experience of available staff

A restaurant in New York City will have different considerations from a restaurant on Main Street in the college town in which I live in the Finger Lakes region of upstate New York. The above list is not exhaustive.

By looking at past trends and combining them with weather forecasts and the college schedule (to know how many students are in town), the restaurant on Main Street can better determine how to maximize revenue and minimize expense. Spreadsheets and experience suffice for most restaurants in current practice, I believe.

So predictive analytics as a practice means getting and then looking at the available data from the past and forecasts for the future, deciding what is important and how much so, and then producing a result that will help you succeed in what you are trying to accomplish.

You don’t just throw a lot of random data into some software and expect a perfect answer to pop out. While certain classes of problems are repeatable for many clients (e.g., restaurants), others will need subject matter experts to filter out the noise, decide what data to use and how to weight it, and then choose what mathematical techniques to invoke to get an answer.

It will probably require several iterations to get the model accurate. One test of it is to use historical data to see how well the model would have predicted what really happened. If the accuracy was low for that, there is no reason to think it will get better for the future.
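
As a toy illustration of that kind of backtest (every number here, and the naive model itself, is made up):

// Compare a simple prediction against what actually happened on the same past days.
let predicted = [120.0, 95.0, 140.0, 80.0]   // e.g. forecast daily covers
let actual    = [110.0, 100.0, 150.0, 85.0]  // what really happened

let errors = zip(predicted, actual).map { abs($0.0 - $0.1) }
let meanAbsoluteError = errors.reduce(0, +) / Double(errors.count)
// If this error is large relative to the quantities involved, there is little
// reason to expect the model to do better on genuinely unseen future data.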

Replacement for delicious, if you want or need one

I was looking at some blog stats just now and saw that many people landed on some old entries that discussed what I would like to have as a replacement for the delicious social bookmarking service:

The feverish searching was because of rumors that Yahoo was going to shut down the delicious service. Evidently that is not the case.

Diigo logo

I still haven’t found or coded something that would let me save my own bookmarks as I describe in the 4 blog entries above.

Therefore, I have gone back to using Diigo as my primary social bookmarking service. I very much like and use the function that can post your bookmarks to a blog up to twice daily. I use that for my Daily Links.

First impressions of iOS 4.2

Yesterday Apple released the latest version of their operating system for iPhones and iPads. iOS 4.2 is not radically new and different for the iPhone, but does bring new functionality to the iPad.

The primary thing I’ve been waiting for is folders, the ability to hold up to 20 apps in a named collection. I’ve acquired a lot of applications since I got my iPad in April and this will bring more order and structure to my screens. It will also mean that I’m more willing to get some new apps, something that Apple no doubt understood as it rushed to get this feature out. Note that you can give several folders the same name, such as “Games.”

The partial multitasking is good to have, though I haven’t had time to play with it much. It’s not something I’m particularly impressed by, since I think it should have been there on day one.

The really cool feature is AirPlay, the ability to stream music (and video?) to devices like Apple TV. We got one of those mainly to access Netflix, but it’s very cool to sit on the couch and beam music over to my speakers. I must admit that this is slightly redundant, since I could already access my home music collection through Apple TV, but it’s an interesting indication of technology to come.

ApacheCon keynote presentation

Here are the slides I used today during my ApacheCon 2010 keynote. The presentation was called “Data, Languages, and Problems” with the abstract:

Much research work over the next decade will be driven by those seeking to solve complex problems employing the cloud, multicore processors, distributed data, business analytics, and mobile computing. In this talk I’ll discuss some past approaches but also look at work being done in the labs on languages like X10 that extend the value of Java through parallelism, technologies that drive cross-stack interoperability, and approaches to handling and analyzing both structured and unstructured data.

image of cover slide

As I say at the end of the talk, I want to thank colleagues who have shared their time and wisdom with me on these topics. They include John Duimovich, Sam Ruby, Brent Hailpern, David Boloker, Bob Blainey, Stephen Watt, Vijay Saraswat, David Ungar, Tessa Lau, Rodric Rabbah, John Field, Martin Hirzel, and members of the IBM Research staff. I thank them for their conversations and sharing material with me, much of which I have liberally borrowed.

The presentation is also available on SlideShare.

What’s in a name?

This morning some people involved with OpenOffice.org forked the software. OpenOffice.org is an open source office productivity suite originally controlled by Sun and now Oracle that includes a word processor, spreadsheet, presentation application, and other software. With OpenOffice.org you get two things at once, both a website and the name of the application. That’s right, because of trademark concerns, they needed to stick the “.org” in the application name.

Whatever the exact reason, I have always thought this was silly. Even if the name was abbreviated to OO.o, something just seemed off. I’m sure most people just called it OpenOffice. I did, except when Sun people were in the room.

Sometimes you just need to come up with a completely new name instead of doing something odd to the one you love. I would have recommended that for OO.o, but no one asked me.

The new fork is called LibreOffice. Is this an office suite for astrologers or librarians who can’t spell? What if I write it as LibreOffice? Libre is a word that means “free” as in “with few or no restrictions” vs. “at zero price”. So that’s a very free-and-open-sourcey name for this new fork of the office software. I think it will take a while for people to get used to the new name, much less pronounce it. I myself am very pronunciation-challenged, and I’m waiting for someone else to say it out loud so I can repeat it to myself a few times.

In other renaming news, the open source effort formerly called the CodePlex Foundation is now the OuterCurve Foundation. I’ll admit to not knowing what a CodePlex is (unlike, say, an iDataPlex), but I’m also not sure about OuterCurve. Does it refer to racing? Baseball?

Basic naming is hard, but even harder is coming up with a name that has a website available. You also have to avoid trademarks that are for products close to what you are providing. Here “close” is relative, and your sense of it probably differs from that of the attorney of the company suing you for infringement. It’s cheaper, though not always cheap, to do the early research to come up with an available name.

I have some experience with this. In the middle 1990s I came up with some software that would display text and mathematical expressions on web pages and in a standalone browser. I cleverly came up with the name “techexplorer,” thinking that the software would be used for exploring technical documents. The IBM naming police did not like this at all. It wasn’t descriptive enough to differentiate it from other possible uses of the word “techexplorer,” none of which I could find. Therefore the official name became the “techexplorer Hypermedia Browser.”

Ouch. I still cringe at that. I should have found a completely new name as I suggested above.

Good luck to all parties with their new names.

What’s holding back presentation software?

I can’t think of one thing I do with presentation software today other than creating PDFs that I didn’t do ten years ago.

We have Microsoft PowerPoint, we have OpenOffice.org Impress, and IBM’s Symphony. Over on the Mac we have Keynote. Toss in a few others such as KOffice and we have the office productivity market.

These all have value to their users, though if they don’t support ODF, the Open Document Format, in a first-class way, I don’t care too much about them. On a regular basis I use Symphony and, to a much lesser extent, OpenOffice.org and Keynote.

I don’t view presentations on the web as a matter of course, though I do look at SlideShare occasionally. I probably get a dozen presentations a day for work. Unless I’m going to edit them, I want them in PDF format. Otherwise I expect ODF.

The software for creating and deploying presentations has changed very little in the sense that we create blank slides, use templates and predefined layouts, add text and images, and fiddle with fonts and colors. Depending on the application you choose, this is more or less easy.

If you were to create a new desktop presentation application from scratch, what features would you put into it? What would you do differently compared with the apps above?

I’ve addressed some of these ideas before in “Presentations: Still too hard to mix and match” and “Presentations: The death of complexity”.

Here’s an idea of what I would do. Note my usual disclaimer that these are my own opinions and not those of any IBM product group.

  • Forget backward compatibility with the Microsoft formats. I understand that for some of you this is a non-starter, but this is my app and I’m starting with a clean slate. I have no interest in supporting the huge number of features that minorities of users need. I also don’t want to support all the failed formats contained in OOXML. Therefore it all goes.
  • I would support ODF natively, but would look at identifying, if possible, the subset that I actually need.
  • Excellent PDF export is necessary.
  • Like applications such as Firefox and WordPress, I would have a well defined and documented architecture for extensions and hooks. The goal is to keep the core small, tight, and well understood. From there we would drive a third-party market for tools that extend the core. These could include input format filters and export plugins.
  • I would use Python as the macro language in the presentation editor.
  • While I would target the desktop, the architecture must facilitate multi-touch interfaces such as the iPad and the upcoming Android tablets.
  • I would not prioritize support for devices as small as a smartphone.
  • The display engine would be cleanly separated from the core components. For the desktop, I would start with a Linux port, then do the Mac, and finally Windows.
  • Themes and presentation documents need more metadata to make it simple to switch themes easily and accurately. That text box at the top of a slide in a big font is not assumed to be a title; it is known to be a title because of the information associated with it. This also allows me to create and manipulate presentations programmatically, even on servers. No guessing about slide structure is allowed.
  • I need to be able to manage groups of one or more slides for reuse, with versioning. It is still far too difficult to create libraries of slides and then put them together when necessary into new presentations. Slides and groups of slides need tags. For extra credit, slide groups might have suggested dependencies so you know, say, that you should not include these 4 slides without showing those other 2 first. Similarly, one group of slides might be indicated as being the in-depth expansion of another group. (A rough sketch of what such tagged, structured slide data might look like follows this list.)
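
To make the last two points a bit more concrete, here is a rough sketch in Python of the kind of explicit slide structure and tagging I have in mind. Every name and field here is invented for illustration; this is not a proposal for an actual format, just a way of showing that a title is a title because the data says so, not because of its font size or position.

from dataclasses import dataclass, field

@dataclass
class SlideElement:
    role: str        # "title", "bullets", "image", ... declared, never guessed
    content: object

@dataclass
class Slide:
    elements: list
    tags: set = field(default_factory=set)
    version: int = 1

    def title(self):
        # No heuristics: the title is whatever element declares itself to be one.
        for element in self.elements:
            if element.role == "title":
                return element.content
        return None

deck = [
    Slide(elements=[SlideElement("title", "Why open standards matter"),
                    SlideElement("bullets", ["Interoperability", "Longevity"])],
          tags={"strategy", "intro"}),
]

# Because the structure is explicit, a server-side tool can retheme or reflow
# the deck programmatically without guessing what each text box means.
for slide in deck:
    print(slide.title(), sorted(slide.tags))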

What am I missing? What would you do differently?

Math apps and the updated iOS Developer Program License Agreement

Apple’s changes to the iOS Developer Program License Agreement resolve some issues but still contain confusing elements for those who might want to develop sophisticated apps such as those for mathematical computation.

As I first discovered this morning in a blog post by Hank Williams, Apple has changed their iOS Developer Program License Agreement to be less restrictive about the tools used to create iOS apps for the iPod Touch, iPhone, and iPad.

Apple’s press release states:

We are continually trying to make the App Store even better. We have listened to our developers and taken much of their feedback to heart. Based on their input, today we are making some important changes to our iOS Developer Program license in sections 3.3.1, 3.3.2 and 3.3.9 to relax some restrictions we put in place earlier this year.

In particular, we are relaxing all restrictions on the development tools used to create iOS apps, as long as the resulting apps do not download any code. This should give developers the flexibility they want, while preserving the security we need.

Those relevant sections in the license agreement are:

3.3.1 Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs.

3.3.2 An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple’s built-in WebKit framework.

3.3.9 You and Your Applications may not collect user or device data without prior user consent, and then only to provide a service or function that is directly relevant to the use of the Application, or to serve advertising. You may not use analytics software in Your Application to collect and send device data to a third party.

In April I looked at the previous restrictions in the license and concluded that it would be very difficult to implement a full-featured mathematics application on the iPad.

Nota Bene: I am not an attorney and the following does not represent a legal opinion and certainly not an official IBM point of view.

The changes to sections 3.3.1 and 3.3.2 improve things somewhat today:

  • Evidently you can now have an interpreter on the device. This means that you could run Python or a Java virtual machine on an iPad.
  • From 3.3.2, prepackaged scripts are allowed, so interpreted Python code is allowed if that code comes with the app.

However,

  • You cannot download code to be interpreted.
  • I am not sure if you are allowed to type in code on the iPad and then have it interpreted. I suspect not, because that code is not prepackaged with the app, even though it is not downloaded.

From the perspective of building a math app with Python or another interpreted language, I read this strictly as meaning that the app and its libraries are fine now, but users cannot write new functions if the math app provides an interpreted language, as Mathematica and Maple do.

This is problematic. If, say, the library does not provide a factorial function, am I not allowed to write one?
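
To make the gray area concrete, here is a tiny sketch in Python (chosen purely for illustration; the function names and the user-typed snippet are mine, not anything Apple or a particular app defines). The first case is code packaged with the app, which 3.3.2 clearly allows; the second is code the user types into the app at runtime, which is never downloaded but is not prepackaged either.

import math

# Case 1: the function ships inside the app bundle -- prepackaged, so allowed.
print(math.factorial(10))                   # 3628800

# Case 2: the library lacks a function, so the user types one in at runtime.
# Nothing is downloaded, but this code is not "packaged in the Application"
# either, which is exactly the ambiguity discussed above.
user_typed = """
def double_factorial(n):
    return 1 if n <= 1 else n * double_factorial(n - 2)
"""
namespace = {}
exec(user_typed, namespace)                 # the embedded interpreter runs user input
print(namespace["double_factorial"](9))     # 9 * 7 * 5 * 3 * 1 = 945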

I suspect that one of the things Apple wants to avoid is system calls into the iOS operating system by random downloaded scripts. I hope it is not just a question of performance. Some computations take a very long time.

I really can’t see how this type of interpreted script for math computations should cause any problem for the iPad device, for Apple, or the users. This form of code interpretation is how things get done in these kinds of apps.

Indeed, if I have a word processing document it contains markup to indicate paragraphs, fonts, colors, and so forth. A word processing app interprets that information, which could be said to be a descriptive script. Or is it OK to interpret such things? Do I need permission from Apple to do this?

I don’t think this is the last we will hear from Apple in this area. Their statement is now shorter, but it is not complete enough regarding the kinds of code that might be interpreted. I think another round is necessary to clarify matters.

On the other hand, perhaps all this is below Apple’s radar or level of caring. While that might be true, it might be better to ask permission first rather than asking forgiveness later when you submit your app for publication.

IBM Lotus Symphony 3 Beta 4 is now available

Symphony logo
IBM Lotus announced this morning that Beta 4 of Lotus Symphony 3 is now available. I use Lotus Symphony 3 in beta as my day-to-day office productivity suite.

Aside from any official statement regarding the wonderfulness of this beta, my IBM friends and colleagues inside the company have given it great reviews. This is the final planned beta before the v3 product release.

If you use OpenOffice or Microsoft Office, I recommend you give the free Symphony product a try. It’s available for Linux, the Mac, and even Windows.

Searching from the Firefox address line

One of the nice things I liked about the Chrome browser was searching from the address line, that area at the top of the screen where you would normally type in some URL like http://www.sutor.com. Firefox has a search area on the upper right, but I really like the idea of having one place to type in something meaning “this is what I want, you figure out how to get me there.”

I don’t believe Firefox had this capability when it first started, but you can now set it up to do a search. In fact, it probably works to some degree right now. Try it.

To make Firefox initiate a Google search when it can’t decode what you typed as a web address, do the following:

  1. Type in about:config in the address line.
  2. If you get scared off by the warning on the next page, stop. Use the search entry area instead.
  3. Otherwise, click the button about being careful.
  4. Scroll down to where you see keyword.URL in the first column (which is called Preference Name).
  5. Double-click on it and replace the value there with http://www.google.com/search?q= .
  6. Click the OK button and you are done.

I first learned about this technique at LiewCF.com, which says pretty much exactly what I told you above. Kudos to that site and author.

Setting Firefox as your default browser

On Linux and Windows, setting Firefox as your default system browser can be done within Firefox itself. From Preferences, go to the Advanced tab and look down at System defaults. Click the Check Now button and then make Firefox the default browser from the next dialog that pops up.

Setting Firefox as the default browser on Linux

This method appears to work on the Mac as well, though not all applications appear to believe it. Therefore, the most reliable way to make Firefox your default OS X browser is, paradoxically, to do it within Safari.

On the Preferences… | General options tab, choose Firefox from the Default web browser dropdown list.

Setting Firefox as the default browser within Safari on a Mac

Around the web: IBM adopts Firefox

Here are a few links to stories and blog entries about IBM’s announcement that it is adopting the Mozilla Firefox browser for internal use.

Saying it out loud: IBM is moving to Firefox as its default browser

I talk a lot about software in this blog but most of the discussion is at the personal level: I tried this, I experimented with that. I hardly ever talk about what I use for doing my IBM business and more rarely still do I talk about IBM’s internal policies about software use. This entry is different, and gives you a bit of a view inside the company.

Like many individuals and members of organizations, IBMers use their browsers a lot for conducting business. Our desktop and laptop software environments have some common applications but also software specific to our various jobs. And these jobs are varied, as there are about 400,000 IBM employees around the world.

Some of the software we all use shouldn’t surprise you since we make it, such as Lotus Notes, Lotus Sametime, and Lotus Symphony.

Firefox logo

We’re officially adding a new piece of software to the list of default common applications we expect employees to use, and that’s the Mozilla Firefox browser.

Firefox has been around for years, of course. Today we already have thousands of employees using it on Linux, Mac, and Windows laptops and desktops, but we’re going to be adding thousands more users to the rolls.

Some of us started using it because it was new and fast and cool. I tried it for those reasons, but I still use it for the following ones:

  • Firefox is stunningly standards compliant, and interoperability via open standards is key to IBM’s strategy.
  • Firefox is open source and its development schedule is managed by a development community not beholden to one commercial entity.
  • Firefox is secure and an international community of experts continues to develop and maintain it.
  • Firefox is extensible and can be customized for particular applications and organizations, like IBM.
  • Firefox is innovative and has forced the hand of browsers that came before and after it to add and improve speed and function.

While other browsers have come and gone, Firefox is now the gold standard for what an open, secure, and standards-compliant browser should be. We’ll continue to see this or that browser be faster or introduce new features, but then another will come along and be better still, including Firefox.

I think it was Firefox and its growth that reinvigorated the browser market as well as the web. That is, Firefox forced competitors to respond. Their software has gotten better and we have all benefited. We’ll see this again as Firefox continues to add even more support for HTML5.

So what does it mean for Firefox to be the default browser inside IBM? Any employee who is not now using Firefox will be strongly encouraged to use it as their default browser. All new computers will be provisioned with it. We will continue to strongly encourage our vendors who have browser-based software to fully support Firefox.

We’ll offer employee education and point our people to great online information, all of which will look wonderful in Firefox. IBM has contributed to the Firefox open source effort for many years and we’ll continue to do so.

There’s another reason we want to get as many of our employees using Firefox as soon as possible, and that is Cloud Computing. For the shift to the cloud to be successful, open standards must be used in the infrastructure, in the applications, and in the way people exchange data.

The longstanding commitment of Mozilla to open standards and the quality of the implementation of them in Firefox gives us confidence that this is a solid, modern platform that should be part of IBM’s own internal transformation to significantly greater use of Cloud Computing. Examples of this already include Blue Insight, an internal cloud for business analytics, and LotusLive, for online collaboration.

It is not news that some IBM employees use Firefox. It is news that all IBM employees will be asked to use it as their default browser.

As you think about the browser you use at home and at work, consider the reasons we have stated for our move. It’s your choice, obviously, but Firefox is enterprise ready, and we’re ready to adopt it for our enterprise.

Update on my Firefox extensions

It used to be that I tried a new Firefox extension every day. Since the Firefox browser from Mozilla became a standard tool for how I do business and generally access the web, I’ve focused less on trying new things and more on tuning the environment I have. I then replicated that environment across the various computers I use with the various operating systems on them.

I don’t use Firefox exclusively. I’m a software guy and I love to try new things, so I certainly have Chrome, and on the iPhone and iPad I use Apple’s Safari browser. I’ve played with Opera but never stuck with it. Firefox is the browser I use when I need to know that things will work and look right.

I’ve decided that I am going to spend a little time each day for a few days and check out what’s been going on in the Firefox extension world. Before I do that, however, I want to list the extensions I do use now to establish the baseline.

My Firefox extensions

  • Adblock Plus: I’ve tried to live with website ads, especially when I experimented with them here, but they were just too annoying. This addon removes most of them and there are subscriptions to keep your blocked list up to date.
  • ColorfulTabs: This makes my tabs appear in different pretty colors. Not essential, but it really improves the user interface experience.
  • Diigo: I use Diigo to save and publish the daily links that appear in my blog, and this is their official addon to make it easy to capture those bookmarks.
  • Firebug: This addon is a great tool for debugging web pages when things go wrong. I mostly use it for figuring out why CSS isn’t doing what I thought it should.
  • OptimizeGoogle: This cleans up some behavior in various Google apps, makes some more secure, and gets rid of even more ads.
  • Xmarks: This synchronizes my bookmarks across multiple browser types across multiple computers and devices.

12 days with an iPad

Twelve days ago I got a new iPad with WiFi and 3G and promptly took it on a one week business trip to Europe. Generally, I think it lived up to its hype and is quite elegant. I very much like the choice of apps and I’m excited about what the changes to the UI will mean to software and the industry. Coupled with the upcoming tablets based on open source, I think competition will drive some real innovation in this space.

There are two areas where I repeatedly found myself thinking that the tablet was less convenient than a laptop: multitasking and text editing for my blog.

It is well known that the iPad does not do multitasking in general, though the Apple apps can do it. This means that generally when you move from one app to another, the first saves state and shuts down. When you want to go back to that first one, it restarts and lets you reload your data. This is neither fast nor convenient, and it gets tiresome quickly. I don’t mind the one-app-per-screen rule, but the slow context shifting hurts productivity. Better multitasking will come later this year, though it will not be the same as we are used to on modern operating systems like Linux or OS X.

The second area, text editing, is just awkward. When I create a blog entry I often include links, lists, and some special formatting. This involves selecting text, copying, opening forms, pasting, and so forth. Copying text from one app to another can be slow because of the multitasking, and the general browser-based interfaces such as the WordPress admin and editing panels have been tuned for mice and full keyboards, not fingers. Coupled with not being able to use social bookmarking sites like Diigo in an easy way, this means that I won’t be doing much on my iPad for my blog for some time, other than the really easy things like approving comments.

Things I do like are the interfaces for music, App Store, video, the Kindle App, maps, and some games like Scrabble. Using a browser with a screen that’s big enough to see a lot of the page is a big improvement over the iPhone. Safari on the iPad needs tabs, again for speed of switching.

Presentations: Still too hard to mix and match

In a post last week I noted that the presentations I produced ten years ago were more complicated than those I make and use today. Many of the whiz-bang features in presentation software are just not things that I use, such as transitions, sound, and animations. My slides end up in PDF more often than not.

Here’s one thing I really expected we would have licked by now: much better facilities for mixing and matching slides from existing presentations so that new ones could be created.

There are several possible levels to this:

  1. Slide import without screwing up or deleting content
  2. Slide import that actually tries to make things work in the new template, with decent results
  3. An interactive learning mode that guides the transition of slides from the old formats to the new

I’m usually thankful if I get the first, giddy with the second, and the third is science fiction as far as I’m concerned.

Assuming that we get the slide import problem fixed some day, there’s something else I really want. Imagine a general slide deck where most of the deck is useful to any audience. Some of the pages, however, need to have alternate forms covering, say, the products discussed, partners mentioned, customers referenced, and geographies addressed.

This is really a higher form of template where the “build” allows you to slide in the different versions of slides. In my case, for example, I could create a deck where the customer references were all from Asia Pacific and equally used Linux distributions from Red Hat and Novell.

To do this, I would need to understand where in the presentation the variations could occur. Then I need to have each set of versions have similar formatting and be constructed as easily as possible. Maybe something like mail-merge could work for the graphics and slide contents?

Each of the versions should be tagged or categorized so it would be easy to see which are the possibilities for each variable spot in the deck. Essentially, I would have a library of slides or slide data with some semantic tagging. This library would need to be maintained, with slides or data added, deleted, or retagged.

Once I had this, I would expect a really nice deck building user interface to glue together the pieces for me.
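
Here is a rough sketch in Python of what that glue might look like. Everything in it, from the slot names to the tag sets, is invented for illustration; the point is only that once slides carry semantic tags, picking the right variant for each spot in the deck becomes a small matching problem rather than manual copy and paste.

def build_deck(fixed_slides, variable_slots, library, audience_tags):
    """Assemble a deck from common slides plus one tagged variant per slot.

    fixed_slides   -- slides used for every audience
    variable_slots -- ordered slot names, e.g. ["customer_refs"]
    library        -- {slot: [{"tags": set_of_tags, "slides": [...]}, ...]}
    audience_tags  -- e.g. {"Asia Pacific", "Red Hat", "Novell"}
    """
    deck = list(fixed_slides)
    for slot in variable_slots:
        candidates = library.get(slot, [])
        # Choose the variant that shares the most tags with this audience.
        best = max(candidates,
                   key=lambda group: len(group["tags"] & audience_tags),
                   default=None)
        if best is not None:
            deck.extend(best["slides"])
    return deck

library = {
    "customer_refs": [
        {"tags": {"Asia Pacific", "Red Hat", "Novell"},
         "slides": ["Asia Pacific customer references"]},
        {"tags": {"Europe"}, "slides": ["European customer references"]},
    ],
}
print(build_deck(["Introduction", "Strategy"], ["customer_refs"], library,
                 {"Asia Pacific", "Red Hat", "Novell"}))
# ['Introduction', 'Strategy', 'Asia Pacific customer references']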

So here is my challenge, particularly to those who are using ODF, the OpenDocument Format, and the ODF Toolkit: build all this. Forget for the moment about the presentation software itself and think instead about the information on the slides, how to categorize the pieces, where the variable content is, and how the deck can be visually and semi-automatically constructed.

This last week someone on Facebook was bemoaning that he was turning 40 and yet we still didn’t have jetpacks. This presentation stuff isn’t rocket science, but we really should be much further along by now, in my opinion.


Presentations: The death of complexity

I don’t know about you, but the presentations I create today are much simpler in design than those I created ten years ago. For example, I now never create presentations that include

  • animation and builds
  • slide transitions
  • sound
  • video

Any presentation I create today that will be shared with others ultimately ends up as a PDF file. Therefore the above features won’t necessarily work, nor do I think they really add much other than being distractions.

I do care about

  • good support for templates, including the ability to efficiently change templates and merge parts of presentations that use different templates
  • precise and easy placement of presentation slide elements
  • translation to compact and full fidelity PDF files

Note that the best presentation software can allow people to create truly ugly slides. Conversely, someone who is a true presentation artist can use crummy software to create pretty good slides, at least some of the time.

So just how much is really needed to create and represent presentations like those above? For the representation question, an appropriate query would be “what subset of the OpenDocument Format (ODF) is needed to hold all the necessary information, and nothing more?”

The creation side can vary quite a bit. Assuming you are using ODF, it would be possible though tedious to use a text editor and command line tools to create a file. I wouldn’t want to do that and would expect something with a better user interface to make slide creation and reuse easy.
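
As a small illustration of what “by hand” involves, here is a Python sketch that zips up a one-slide presentation. This is only a demonstration of the mechanics, and the file names are mine; it is not a conformant ODF producer. A real file would also want a styles.xml defining the referenced master page, richer manifest entries, and so on, and some consumers may refuse anything less.

import zipfile

CONTENT = """<?xml version="1.0" encoding="UTF-8"?>
<office:document-content
  xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
  xmlns:draw="urn:oasis:names:tc:opendocument:xmlns:drawing:1.0"
  xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
  xmlns:svg="urn:oasis:names:tc:opendocument:xmlns:svg-compatible:1.0"
  office:version="1.2">
  <office:body>
    <office:presentation>
      <!-- The named master page would normally be defined in styles.xml. -->
      <draw:page draw:name="Slide1" draw:master-page-name="Default">
        <draw:frame svg:x="2cm" svg:y="2cm" svg:width="20cm" svg:height="4cm">
          <draw:text-box><text:p>Hello from a hand-built ODF slide</text:p></draw:text-box>
        </draw:frame>
      </draw:page>
    </office:presentation>
  </office:body>
</office:document-content>
"""

MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<manifest:manifest xmlns:manifest="urn:oasis:names:tc:opendocument:xmlns:manifest:1.0">
  <manifest:file-entry manifest:full-path="/"
      manifest:media-type="application/vnd.oasis.opendocument.presentation"/>
  <manifest:file-entry manifest:full-path="content.xml" manifest:media-type="text/xml"/>
</manifest:manifest>
"""

with zipfile.ZipFile("minimal.odp", "w", compression=zipfile.ZIP_DEFLATED) as odp:
    # Convention: the mimetype entry comes first and is stored uncompressed.
    odp.writestr(zipfile.ZipInfo("mimetype"),
                 "application/vnd.oasis.opendocument.presentation",
                 compress_type=zipfile.ZIP_STORED)
    odp.writestr("META-INF/manifest.xml", MANIFEST)
    odp.writestr("content.xml", CONTENT)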

Because of all the features it needs to support, I think most presentation software is overkill when it comes to creating my kinds of presentations. It does much more than I need.

On the other hand, I’ve been very impressed with the progress made in visual editors in Web 2.0 software such as WordPress. While not truly WYSIWYG, they visually support features such as tables, lists, font styling, and images. Within five years I expect them to have much better support for CSS while editing, and hence be even closer to the final browser rendering while the document is being created.

This begs the questions:

  • By 2020 will current presentation formats and software be completely obsolete?
  • Will we instead be using HTML 5 and CSS to hold the content, structural, semantic, and formatting information?
  • Can we use in-browser applications to create and show beautiful slides?
  • Can we move to something simpler and without the legacy baggage?

If we do this, move to something more minimal that still allows us to create beautiful but static slides, we can then start adding back in some of the features that HTML 5 will support.

Browsers as well as software like Drupal, WordPress, and fully formatted email have significantly reduced the need for word processors. I think presentation software will be the next category of productivity application to be affected, with spreadsheets coming last.


Really setting the default browser on a Mac

I was having a problem last week with my Mac: even though I set my default system browser to Google Chrome, one application just refused to believe it wasn’t Firefox any longer. (And no, that application wasn’t Firefox itself!)

Though I tried several times within Chrome and Firefox to toggle the system browser to end up being Chrome, that one application was being recalcitrant.

Today I got an email from my friend and IBM colleague Kelvin Lawrence with the workaround to the problem: go into Safari and under Safari > Preferences > General toggle the browser to something non-Chrome and then back to Chrome. This seems to do something a little extra and it did the trick. That one application now opens web pages in Chrome.

Thanks for the fix, Kelvin!

Three Google Chrome extensions to get you started

I’ve recently started using the Google Chrome web browser and have made it the default over Firefox on several of my machines. Though Firefox has thousands of addons, or extensions, I only really use about half a dozen. That means when I move to a different browser I might be missing some functionality, but not a lot.

Google Chrome logo

Here are the first three Google Chrome extensions I’ve started using, the first two of which are direct replacements for their Firefox counterparts.

  • The Diigo bookmark extension. Diigo is a “Web Highlighter and Sticky Notes, Online Bookmarking and Annotation, Personal Learning Network.” I use it to produce the Daily Links that are published on this blog. I’ve run hot and cold on Diigo over the last few years, but I’m back to using it as the best thing around to save and share things that I’ve read on the web.
  • XMarks Bookmark Synch tool. XMarks can save both bookmarks and passwords across multiple machines and multiple browsers, though I only use it for bookmarks. When I fire up a new machine and install a new Linux image, I know I can have all my bookmarks ready to go in a few minutes. Google Chrome also has synchronization capability, but it is limited to that browser, though on multiple operating systems. XMarks works in Firefox, Chrome, Safari, and Internet Explorer.
  • TooManyTabs. As you open more and more tabbed windows, the tabs get narrower and narrower, so much so that you can’t read the labels. By clicking the TooManyTabs button, a new window opens up that clearly shows all your windows and what’s in them. Thanks to Kelvin Lawrence for his recommendation of this extension.

Second Life tip of the day

It’s been about three years since I posted a tip of the day for the Second Life virtual world, but since I’m attending a virtual meeting in IBM right now, I’ll put this one up:

You can turn off the (annoying) typing animation and sounds by going into Edit | Preferences | Text Chat and unchecking the box next to “Play typing animation while typing.”

You might also want to go into Audio & Video and lower the sound for UI and turn up the sound for Voice.

First impressions: Twinity virtual world

Twinity logo

I recently had a chance to try out the beta for a new virtual world called Twinity. Like Second Life, Twinity aims to be a virtual world where you can wander around, meet and talk with people, shop, and augment your avatar and your living space, if you have one. This is a beta, and so there are some issues, but I think it’s a pretty cool approach.


IBM releases Lotus Symphony 3 Beta 2

Lotus Symphony logo

IBM just released Lotus Symphony 3 Beta 2:

Lotus Symphony 3 Beta 2 represents a major new advancement for our Lotus Symphony users. Based on the current OpenOffice.org 3 code stream, Lotus Symphony 3 Beta 2 offers loads of new features and capabilities and improved file fidelity. The Lotus Symphony team is excited to get it out to you and get your feedback.

This is a very big upgrade as is indicated by the jump from version 1.3 to version 3. The beta is available for Linux desktops, Mac OS X, and even Windows.

Also see the ZDNet blog entry “IBM launches Lotus Symphony 3 beta; Office alternatives pile up” by Larry Dignan for some screen shots.

Virtual world resources and directory

I’ve just added a page to this site containing links to resources and books about virtual worlds and 3D networked online games. Suggestions for additions welcome.

In looking through the available books, I was struck by the number that have been published in the last six months. That said, those addressing education and virtual worlds tend to be quite expensive. I understand the issues around lower volume and smaller audiences, but I’m not sure those high prices will attract many readers. It’s a general problem in the book world, especially the academic book world, but it’s still striking in comparison to the more major market books.

Next generation virtual worlds: preliminaries

I’m about to start another series of blog entries on what I see as some of the most important issues to consider for the next generation of virtual worlds. Since I’ve written a fair amount before on these networked 3D immersive environments, I thought it would be worthwhile to provide a list of my older blog entries to provide some history of my thinking, make it easy to see where I agree or disagree with what I thought a few years ago, and to ensure that I’m at least considering everything I once thought important.

So here are the previous blog entries in chronological order:

The Sirikata open source platform for games and virtual worlds

As many of you know, I’m still quite interested in virtual worlds and 3D immersive environments though I certainly don’t spend as much time in Second Life as I did several years ago. So from time to time I poke around and see what people are working on, and tonight I came across the Sirikata project from the Stanford Virtual Worlds Group in the Computer Science Department at Stanford University.

Documentation is a bit sparse, but the code has been released under the BSD license and is written in C++. Here’s the teaser video they’ve produced:

Sirikata Teaser from Sirikata on Vimeo.

The big player in open source virtual world platforms is OpenSim, an “extended subset” of Second Life. Croquet is another open source entry in this space.

For some of the research work by the Sirikata team, see

Daniel Horn, Ewen Cheslack-Postava, Tahir Azim, Michael J. Freedman, Philip Levis, “Scaling Virtual Worlds with a Physical Metaphor,” IEEE Pervasive Computing, vol. 8, no. 3, pp. 50-54, July-Sept. 2009, doi:10.1109/MPRV.2009.54

Fixing my Firefox crash

Last night my Firefox browser started to crash. Not occasionally, but every single time I started to type something in the search bar in the upper right hand corner. What the heck happened?

Firefox logo

There are several possibilities when an application suddenly starts getting buggy:

  • Gamma rays from outer space changed some of the bits on your hard drive, thereby messing up your software.
  • You are having hardware problems, such as memory glitches or hard drive problems, that are causing instability.
  • Your machine has been infected with a virus or a worm.
  • Some other application messed up a file that the application in question uses.
  • You deleted or otherwise mangled a configuration file or (for Windows) a registry entry.
  • You installed an operating system update that changed something, and that eventually caused your application to break.
  • You installed an update to the application itself.
  • For applications that support extensions, addons, or plugins, you added or updated one of those, and it broke your application.

When this bad behavior started, I popped over to another machine running the same operating system and checked to see if Firefox there was broken. It wasn’t.

Next I tried doing the same thing that demonstrated the problem 5 or 6 more times to see if it went away as magically as it appeared. It did not.

Ah, I thought, I bet I have Firefox 3.5! Will upgrading to Firefox 3.6 fix the problem? It didn’t, though it did tell me that several of my extensions were not yet available for Firefox 3.6.

Next I considered whether now was a perfect time to switch to Google Chrome. Perhaps, but that was avoiding the problem rather than fixing it.

I then completely, or so I thought, wiped Firefox from my machine and reinstalled it from scratch. That did not fix the problem.

I wondered … are my old extensions still installed? They were, so evidently my cleanup had been incomplete. I uninstalled them all and restarted Firefox. The problem was gone.

At that point I vaguely remembered that Firefox had asked to install some extension updates and I was so busy with something else that I just accepted it and got on with my work. That was before the problem started. Hmmm.

I started reinstalling my primary extensions and checked after each one to see if I had the problem. I didn’t, but I stopped after five. I suspect the problem was either in Firebug or YSlow, but I didn’t verify. I know that Adblock Plus, COLT, ColorfulTabs, Diigo Toolbar, and XMarks are not causing issues, and those other two extensions are the only ones I did not reinstall.

The moral of this, as with most debugging, is: if you change something and then your system is broken, what you changed caused the problem. It’s not always direct cause and effect, and you may not notice the problem for a while, but it’s good to strip back to basics and then add things in one by one until you can find the culprit.

Update: Consensus seems to be that the update to YSlow is problematic.

Press Release: “ZSL Unveils ‘PowerCube’ DaaS in the U.S., Africa and India”

I’m a little tardy in noting this, but this last week at Lotusphere, IBM partner ZSL issued the following press release, which begins:

ZSL logo
Lotusphere, Florida (PRWEB) January 19, 2010

ZSL, a leading ISV & Global Software Solutions and Services provider, today launched “PowerCube” DaaS (Desktop as a Service), an open source-based desktop collaborative solution with supporting ZSL consulting practice. Available today in the U.S., Africa, and India, “PowerCube” will help mid-market customers using proprietary platforms to migrate to the IBM Client for Smart Work on Ubuntu’s operating system.

Intended for PCs, laptops, netbooks and thin clients as an alternative to commercial desktops and platforms, the ZSL “PowerCube” solution includes packaged services for migrating to the IBM Client for Smart Work, from user segmentation, TCO analysis, BPM based role identification and SOA, to application migration, pilot and production deployment. The DaaS capabilities provide customers with the option of using virtual desktops based on VERDE from Virtual Bridges on a private cloud managed by ZSL or on customer premise.

(I added most of the links in the text.)

Press Release: “IBM Client for Smart Work Available Through Business Partners in India”

Here’s another press release from today involving IBM, Symphony, LotusLive, Ubuntu Linux, and Virtual Bridges. We’re continuing the rollout of the partner-led IBM Client for Smart Work:

IBM Client for Smart Work CD

IBM Client for Smart Work Available Through Business Partners in India

ORLANDO, FL & BANGALORE, India – 18 Jan 2010: IBM (NYSE: IBM) today announced the immediate availability of IBM Client for Smart Work in India through business partners. The IBM Client for Smart Work, IBM and Canonical’s popular cloud-and Linux-based desktop package, is designed to help companies do more with less and lower desktop computing costs by up to 50 percent. CIO’s, IT directors and IT architects from all types of organizations in India, even those that typically cannot afford new PCs, can now gain immediate access to collaboration capabilities to help them work smarter, with the simple download of the IBM Client for Smart Work onto various thin clients, such as netbooks and other devices.

“Government leaders, CEOs and CIOs are seeking an open, cost effective and collaboration rich client strategy to leapfrog into the 21st century,” said Pradeep Nair, director of IBM India Software Group. “The IBM Client for Smart Work solution brings together the strengths of cloud-based collaboration, virtual desktops, netbook devices and open source, supported by a strong ecosystem of business partners, to help Indian innovators harness the next wave of growth.”

The collaboration package runs on Ubuntu Linux operating system available from Canonical and provides the option to deliver collaboration through the Web in a cloud service model. The Client comes with IBM Lotus Symphony, IBM LotusLive iNotes/Connections and IBM Lotus Notes/Domino, with the option to add IBM Lotus Connections and IBM WebSphere Portal, as well as virtual desktop capabilities using VERDE from Virtual Bridges.

With the mounting interest in this solution, IBM today also announced that Simmtronics Semiconductors will ship their new Simmbooks (netbooks) with IBM Client for Smart Work on Ubuntu already preloaded to clients in India, US, Singapore, Hong Kong, Indonesia, Thailand, UK, and Vietnam. “We launched Simmbooks based on the high demand for netbook type devices for enterprises worldwide,” said Indrajit Sabharwal, managing director, Simmtronics Semiconductors. “Delivering Simmbooks with IBM Client for Smart Work on Ubuntu will help our customers lower their total cost of ownership and be on the forefront of innovation.”