The content on this site is my own and does not necessarily represent my employer’s positions, strategies or opinions.
This post is a great example of why you should never say that you are starting a new series of blog entries. In February of 2010, I wrote a blog post called Virtual Life with Linux: Standalone OpenSim on Ubuntu 9.10, saying
As a complement to my Life with Linux blog series, I’m introducing another series which explores what I can do in virtual worlds and immersive Internet environments on Linux.
I wrote two entries, and that was it. Well, here is the third entry, notes from trying to install the latest version of OpenSim on Ubuntu Linux 13.10. I’m not going to go through all the steps involved, but mostly talk about some of the glitches I encountered and how I resolved them.
First, some notes on Ubuntu 13.10. I have a dual-boot PC with Windows 7 and Ubuntu on it. I used to do a lot with Linux because it was my job and also because I loved the experience of trying all the distros, seeing what was new, and playing with the features. Well, I moved on to a job involving mobile and then running the math department at IBM Research, and I really did not touch Linux for a long time. Long as in the version of Ubuntu on my machine being from 2009.
I fired this up several weeks ago and started the upgrade process, which was excruciatingly slow. Somewhere in there I accidentally hit the power button on the computer and that pretty much wiped out the Ubuntu image. Don’t do that. I eventually burned a DVD of Ubuntu 13.10. Once again the updates were really slow.
This weekend I did the clever thing and did a web search for “slow Ubuntu updates.” The main suggestion was that I find a mirror closer to me, and this made a huge difference. I went into the Ubuntu Software Center, picked Edit | Software Sources, went into Download From, picked Other…, and found a mirror 40 miles from my house. Problem solved.
32 bit Libraries
I installed the 64 bit version of Ubuntu, but you are going to need the 32 bit libraries. There’s a lot on the web about how to do this for older versions of Ubuntu, how you should use multiarch libraries, how you don’t need to do anything at all, and so on. Eventually I found this solution, from the forums for the Firestorm virtual world viewer, and it worked. There are other ways to accomplish the same thing, but this does the job.
sudo apt-get install libgtk2.0-0:i386 libpangox-1.0-0:i386 libpangoxft-1.0-0:i386 libidn11:i386 libglu1-mesa:i386
sudo apt-get install gstreamer0.10-pulseaudio:i386
You need the complete mono package, not just what you install from the Ubuntu Software Center.
sudo apt-get install mono-complete
See the OpenSim build instructions for other platforms.
Install the MySQL client and server from the Ubuntu Software Center. You will be asked for a root password, so write it down somewhere.
There are several ways of getting and installing OpenSim. When I last did this four years ago, I took a “from scratch” approach but I’m doing it more simply now. I used the popular Diva Distribution of OpenSim which comes set up for a 2×2 megaregion (that is 4 regions in a square that behave like one great big region). What you lose in some flexibility you gain in ease of installation and update. Once you download and expand the files, start reading the README.txt file and then the INSTALL.txt file. Other files will tell you more about MySQL and mono, but you did the hard work above.
Since I am not connecting this world to the Internet, I did not bother with the DNS name, I simply used localhost at 127.0.0.1.
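For a standalone world, every address can point at the local machine. As a rough sketch of what the region configuration might look like (the file name, section name, and exact keys vary by OpenSim version, and the Diva Distribution’s setup script normally writes this for you, so treat this as an assumption rather than a verbatim config):

```ini
; Hypothetical Regions.ini fragment for a standalone world
; that is never exposed to the Internet.
[R1]
RegionUUID = 11111111-1111-1111-1111-111111111111 ; placeholder UUID
Location = 1000,1000          ; grid coordinates of the region
InternalPort = 9000           ; port the simulator listens on
ExternalHostName = 127.0.0.1  ; localhost, since nobody connects remotely
```

Since the Diva Distribution’s configuration script asks these questions for you, you may never need to edit this file by hand.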
Follow the instructions for configuring OpenSim and getting it started. You’ll need to give names for the four regions, which I’ll call R1, R2, R3, and R4. These are laid out in the following tile pattern (R1 southwest, R2 northwest, R3 southeast, R4 northeast, matching the terrain file names used below):

R2  R4
R1  R3
You will need to know this if you decide to change the terrains for your world.
For example, suppose you had four terrain files called sw.raw, nw.raw, se.raw, and ne.raw in the terrains subdirectory of your OpenSim installation.
Then you would issue the following from within the OpenSim console to set the terrains for the regions:
change region R1
terrain load terrains/sw.raw
change region R2
terrain load terrains/nw.raw
change region R3
terrain load terrains/se.raw
change region R4
terrain load terrains/ne.raw
A web search will find you many options for terrains. Basically, they are elevation files for your region.
Getting a Browser
I believe that all the popular browsers out there for OpenSim are evolutions of some major version of the Second Life browser after it was open sourced. This OpenSim page has details on your options. If you have a choice, get a 64 bit browser if you are using a 64 bit Linux. I’ve had good luck with both Firestorm and Kokua.
Maria Korolov extensively describes the different ways of getting an OpenSim region up and running in her article OpenSim 102: Running your own sims. In particular, she discusses New World Studio, and I’ll be trying to get that running on my MacBook.
Several years ago I spent quite a bit of time in Second Life when it was the hot 3D social world. The promise was that you could build and visit worlds that had been uniquely constructed by the users. As such, it was a dynamic environment that tended to be slow as all the shapes, buildings, and textures were loaded.
People can customize their in-world presences extensively, from body shape to the clothes and decorations worn. Indeed, you don’t even need to look like a person. Note, however, that you probably should not show up to a business meeting in the form of a squirrel, as my now-retired colleague Irving Wladawsky-Berger once said.
Over time, Second Life fell out of fashion as a world where businesses could set up sites where clients or interested people could visit, learn about products or services, and talk to real people, albeit in avatar form.
For internal business meetings, the lack of truly secure conversation was a problem. We used teleconferences for the voice, and Second Life for the environment. As meetings went on, participants often went inactive, or fell asleep, in Second Life, and we were back to phone meetings as usual.
Second Life lives on today as a social world. That’s never been much of an interest to me, but to each his or her own. It seems to be quite vibrant across a broad range of what “social” means.
My interest in it was always more in the construction aspects, and I’ve written extensively about the techniques involved. See Building in Second Life, By Example. I still get many links to this site from people looking to build moving doors, for example. I also had a long series of blog entries about how to do things in Second Life called My Second Life. Note that this is from 2006, so it is getting a bit old.
You can see all my writings on Second Life by going to the top of this page and entering “second life” in the search box on the right side.
Here is the net for me with Second Life: it is too expensive to be as slow as it is, especially if I only want to use it as an advanced 3D building environment. While new ways of building objects have been introduced, it’s hard to see a lot of difference from the way it was five years ago. I still visit from time to time, but I own no land and spend no money there.
OpenSimulator, or OpenSim for short, is a reimplementation of the Second Life server in open source. It is written in C#, so it requires Windows or the Mono environment on Linux. It does not include a browser, but several are available.
Other than the OpenSim site itself, the best source of information about the technology and the worlds built with it is Maria Korolov’s Hypergrid Business. It is excellent.
Some of the features of OpenSim include:
- an active development community
- better in-world programming options
- the ability to host a world on your own computer, which is completely free
- many online paid hosting options
- the ability to connect your world to several choices of “grids,” or collections of worlds
- teleporting from one world to another across a grid
This means that I could set up a world on my local computer, do all the building I want on it, save an image, and then transfer it to a hosted server. If you can and want to connect your computer to the Internet, you can host your world from there and have others visit it.
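The “save an image” step uses OpenSim Archive (OAR) files. From the OpenSim console, the commands look roughly like this (the region and file names here are my own placeholders):

```
change region R1          # select the region to operate on
save oar r1-backup.oar    # write the region's contents to an archive file
```

On the hosted server you would then run load oar r1-backup.oar from its console to restore the region there.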
To see a modern use of OpenSim, read the article $250,000 project models cities in OpenSim.
Some of the potential downsides are:
- hosting providers come and go, though some have been around for years
- it may be more difficult to find assets you need at the quality you want, for example textures, but there are guides for finding free content
- it is probably best if you have some technical chops or know someone who does
So Second Life costs money to own land and to buy some assets, and is more restrictive. OpenSim and the worlds and grids associated with it provide more freedom, but you are more on your own and there might be some long term risks related to hosting. For me, the freedom is worth the risk.
In 2010 I wrote a blog entry called Virtual Life with Linux: Standalone OpenSim on Ubuntu 9.10. I’ve recently gone through the experience of doing this on Ubuntu 13.10. I’ve published some notes on what I did this time to install it on my PC in Virtual Life with Linux: Standalone OpenSim on Ubuntu 13.10.
I’ve been coding, a.k.a. programming, since I was 15 years old. Since then I’ve used many programming languages. Some of them have been for work, some have been for fun. I mean, really, who hasn’t done some programming while on vacation?
Somewhat chronologically, here are many of the languages I’ve used with some comments on my experience with them. In total I’ve written millions of lines of code in the various languages over four decades.
Basic: This is the first language I used. While primitive, I was able to write some long programs such as a Monopoly game. In between coding sessions, I saved my work on yellow paper tape. I fiddled with Visual Basic years later, but I never wrote anything substantive in it.
APL: Now we’re talking a serious language, and this is still in use today, particularly by one statistician in my group at IBM Research. I was editor of the school newspaper when I was a senior in high school and I wrote a primitive word processor in APL that would justify the text. It sure beat using a typewriter. Some modern programming languages and environments like R and MATLAB owe a lot to APL. They should mention that more.
FORTRAN: My first use of this language was for traffic simulations; I used a DYNAMO implementation in FORTRAN in a course I took one summer at the Polytechnic Institute of New York in Brooklyn. Forget interactive code editing, we used punch cards! FORTRAN was created at IBM Research, by the way.
PDP 11 Assembler: I only took one Computer Science class in college and this was the language used. Evidently the course alternated between Lisp and Assembler as the primary language in which the students wrote. However, our big project was to write a Lisp interpreter in Assembler, which got me hooked on ideas like garbage collection. No, I did not and do not mind the parentheses.
csh, bash, and the like: These are the shell scripting languages for UNIX, Linux, and the Mac. I’ve used them on and off for several decades. They are very powerful, but I can never remember the syntax, which I need to look up every time.
Perl: Extraordinary, powerful, write once and hope you can figure it out later. Just not for me.
PL/I: Classic IBM mainframe language and it saved me from ever learning COBOL. When I was a summer student with IBM during my college years, we used PL/I to write applications for optimizing IBM’s bulk purchases of telecommunications capacity for voice and data. It was basically one big queuing theory problem with huge amounts of data. It was big data, 70s style.
Rexx: This language represented a real change in the way I viewed languages on the mainframe. Rather than being obviously descended from the punch card days, it was a modern language that allowed you to imagine data in more than a line-by-line mode, and helped you think of patterns within the data. It was much easier to use than the compiled languages I had used earlier. My primary use for it was in writing macros for the XEDIT editor.
Turbo PASCAL: This was my main programming language on my IBM PC in the 1980s. The editor was built-in and the compiler was very fast. I used it to write an interactive editor like XEDIT for the mainframe, as well as a Scheme interpreter.
Scheme: A very nice and elegant descendant of Lisp, it was long considered an important programming language for teaching Computer Science. That role has been largely usurped by Java. I liked writing interpreters in Scheme but I never did much actual coding in it.
VM Lisp: This was a Lisp dialect developed at IBM Research for mainframes. My group led by Dick Jenks there used it as the bottommost implementation language for computer algebra systems like Scratchpad, Scratchpad II, and Axiom. Like other Lisps this had two very important features: automatic garbage collection and bignums, also known as arbitrarily large integers.
Boot: An internal language at IBM Research built on Lisp that provided features like collections and pattern matching for complex assignments. It had many advantages over Lisp and inherited the garbage collection and bignums. From time to time I and others would rewrite parts of Boot to get more efficient code generation, but the parser was very hard to tinker with.
Axiom compiler and interpreter languages: The IBM Research team developed these to express and compute with very sophisticated type hierarchies and algorithms, typical of how mathematics itself is really done. So the Axiom notion of “category” corresponded to that in mathematics, and one algorithm could be conditionally chosen over another at runtime based on categorical properties of the computational domains. This work preceded some later language features that have shown up in Ruby and Sage. The interpreted language was weakly typed in that it tried to figure out what you meant mathematically. So x + 1/2 would produce an object of type Polynomial RationalNumber. While the type interpretation was pretty impressive, the speed and ease of use never made the system as popular as other math systems.
C: Better than assembler, great for really understanding how code translates to execution and how it could get optimized. Happy to move on to C++.
C++: Yay, objects. I started using C++ when I wrote techexplorer for displaying live TeX and LaTeX documents. I used the type system extensively, though I’ve always strongly disliked the use of templates. Several years ago I wrote a small toy computer algebra system in C++ and had to implement bignums. While there are several such libraries available in open source for C and C++, none of them met my tastes or open source license preferences. Coding in C++ was my first experience with Visual Studio in the 1990s. The C++ standard library is simply not as easy to use as the built-in collection types in Python; see below.
Smalltalk: Nope, but largely because I disliked the programming environments. The design of the language taught me a lot about object orientation.
Java: This is obviously an important language, but I don’t use it for my personal coding, which is sporadic. If I used it all day long and could keep the syntax and library organization in my head, that would be another story. I would be very hesitant to write the key elements of a server-side networked application in something other than Java due to security concerns (that is, Java is good).
Ruby: Nope. Installed many times, but it just doesn’t make me want to write huge applications in it.
PHP: The implementation language for and , in addition to many other web applications. If you want to spit out HTML, this is the way to do it. I’m not in love with its object features, but the other programming elements are more than good enough to munch on a lot of data and make it presentable.
Objective-C: Welcome to the Apple world, practically speaking. It hurts my head, but it is really powerful, and Apple has provided a gorgeous and powerful library to build Mac and iOS mobile apps. My life improved when I discovered that I could write the algorithmic parts of an app in C++ and then use Objective-C only for the user interface and some library access.
Python: This is my all-time favorite language. It’s got bignums, it’s got garbage collection, it’s got lists and hash tables, and it can be procedural, object-oriented, or functional. I can code and debug faster in it than in any other language I’ve used. Two huge improvements would be 1) make it much easier to create web applications with it other than using frameworks like Django, and 2) have Apple, Google, and Microsoft make it a first-class language for mobile app development.
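A tiny sketch of why that combination is pleasant — bignums, built-in collections, and the different styles side by side:

```python
# Illustrating the Python features mentioned above.

# Bignums: integers grow as large as needed, with no overflow.
print(2 ** 100)  # 1267650600228229401496703205376

# Built-in collections: lists and hash tables (dicts) come for free.
langs = ["Basic", "APL", "FORTRAN", "Python"]
first_used = {"Basic": "paper tape", "Python": "still my favorite"}
print(first_used["Python"])  # still my favorite

# Functional style: filter and transform in one expression.
print([name.lower() for name in langs if len(name) <= 5])  # ['basic', 'apl']

# Object-oriented style works just as well.
class Language:
    def __init__(self, name):
        self.name = name

print(Language("Python").name)  # Python
```

Nothing here is exotic; that is exactly the point.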
In this series I’m looking at my experiences using social media as a business professional. In this entry I examine the rules and policies I personally use regarding enterprise social media.
In the introduction to this series of blog entries, I asked several questions regarding my use of particular social media services, and how I manage the intersection of my personal and professional lives in them.
Here I’m going to look specifically at enterprise social media. That is, services that allow you to blog, post status updates, comment on the status of others, all inside your company’s or organization’s firewall. I’ll assume that what is posted is seen only by people in your organization, not by the general public.
I think use of multiple social networks only has value if you do different things on each of them. If one service targets a specific audience, use it with those people in mind. If you are more or less throwing the same material at all of them, I think you are spamming people, hoping it will lead to some sort of positive outcome for yourself. Therefore, if you post blog entries externally, there is no need to repost internally, but perhaps a link will do.
Enterprise social media is tricky because what you post could be seen by your bosses, your colleagues, and your employees, not to mention HR. You want to keep it relevant to your work life but you do need to be aware of the politics and sensitivities involved.
Do not use internal enterprise social media to state how brilliant you think management and their status updates are and how much their postings have changed your outlook on life, the way you’ll raise your children, or the very essence of your being. It’s fine to just click “Like.”
Be constructive; don’t use enterprise social media to build a mutual admiration society. Ask questions, get a better understanding of the details of how the business is run and why decisions were made, and improve upon the suggestions of others. Don’t ever say in a response posting “What is more important …” but rather say “What is also important …”.
Share what you have learned about making products or service engagements better. Pass along dos and don’ts about working with clients. Don’t ever criticize a client as individuals or a company in your postings. Think about how new technologies like mobile and analytics can help you serve customers better and share your thoughts with your colleagues.
Be interesting. Be a person.
The social media service I use inside IBM is Connections.
Here are answers to the standard questions I’ve used in all these postings.
Who will I follow?
I follow (or connect with) people I know and have worked with directly. IBM has over 400,000 employees. If I connected with everyone, I could never find anything of value in the stream of status updates.
Who will I try to get to follow me? Who will I block?
I’ve suggested to my current employees that I would be honored if they connected with me, but it is completely optional. If anyone expresses uneasiness that “the boss” is watching what they post, I won’t follow them. No one is blocked (I’m not even sure I could if I wanted to).
How much will I say in my profile about myself?
Much of my work contact information is pulled up automatically. I’ve added a few other items, plus links to my external social networking activities. I certainly don’t list my personal hobbies in my inside-IBM profile, though I don’t think that is out of bounds in general. Since I cover my personal social networking elsewhere, I don’t redundantly add things in my internal profile.
What kinds of status updates will I post? How often will I post?
Though many people blog internally, I don’t. When I first started blogging in 2004 I had a WebSphere blog, then a developerWorks blog, an internal blog, and then one personal blog and one WordPress business blog. It didn’t take me long to decide I needed just one, and that is what you are reading here.
If I had something to say about open source, standards, Linux, WebSphere, or mobile, I would not have a special inside-IBM version and a different outside-IBM one. For one thing, this helped me keep the messages straight! Since I spoke publicly quite a bit, I needed to make sure that I did not say things internally in print that might inadvertently get repeated externally.
I do use Connections Communities now to share very specific internal information with named groups of people, such as the worldwide Business Analytics and Mathematical Sciences community. This is quite useful.
In terms of status, I post questions, some simple statements about IBM activities in which I’m engaged, and occasionally some critiques of features of processes or software.
While it’s fine to inject the occasional comment about non-work matters, I do not recommend that you use a lot of bandwidth in your company’s social networking service discussing American Idol or the World Cup. Take it elsewhere.
When will I share content posted by others?
Sometimes if I think it is really important or answers a question someone posts.
How political, if at all, will I be in my postings?
Zero, nada, zip.
How much will I disclose about my personal details and activities in my postings?
On what sorts of posts by others will I comment?
Anything I see where I might add something useful to the conversation.
What’s my policy about linking to family, friends, or co-workers?
I’ll link to co-workers to share what they’ve said or to note them as experts on a particular subject.
Blog entries in this series:
It’s been quite some time since I’ve posted an entry here. It’s been a very busy summer in both my personal life and my business one. I changed jobs within IBM effective August 1: I went from the IBM Software Group, where I co-led the Mobile Enterprise strategy as well as led Product Management for the WebSphere Application Server, over to IBM’s Research Division. Here in Research I’m the VP for Business Analytics and Mathematical Sciences (BAMS).
This is actually a return to Research for me. I spent 1984 to 1999 in the Mathematical Sciences Department, as it was called then, including three years away at Princeton finishing my Ph.D. in theoretical Mathematics. During my time since I left Research I had various jobs in IBM Corporate and in the Software Group working on and leading efforts in web services, standards, open source, Linux, WebSphere, and Mobile.
I am now responsible for a worldwide community of several hundred researchers focusing on basic and applied science in analytics and optimization. I’ve spent a lot of time over the last few weeks meeting my team members, coming up to speed on the work of BAMS as well as the Research Division and, well, doing the job.
It’s very different from what I’ve been doing over the last few years. When I can discuss it, I’ll talk about the work, what it means and why it is important, what its importance is for the industry, and how it will affect us all. In that last sense, I’ll talk about analytics and optimization in general, and not just about what we are doing here.
There’s a lot of confusion about analytics, and my sense is that the term is applied much too widely. That said, there are many more areas of applicability than I think many people realize. So it’s really a question of sharpening the definitions and terms used, and then employing them correctly.
I also plan to get back to some of the things in my personal life that I have not written about recently. For example: yes, the sailboat is in the water, but not Lake Ontario.
In this post I talk about IBM mobile products and what happened at a large IBM conference. As a result, it is more specific to IBM’s offerings than some of my other blog entries.
This week I’ve been in Las Vegas at the IBM Impact conference. The days have been a blur of meetings with partners, customers, and colleagues from around the world. We’ve talked about the new PureApplication System and updates across the software portfolio for connectivity, integration, business process and decision management, and application integration.
The Liberty Profile in the new WebSphere Application Server version 8.5 has been an especially hot topic. Conversations about that often go something like “It takes up less than 50Mb. Wow! It loads in 5 seconds. Show me! You can develop with it on your Mac. IBM did that?”
We’ve also had quite a few conversations about mobile and I’ve learned a lot.
Now I’m one of the executive leaders for mobile at IBM and I discussed it (briefly) on the main stage on Tuesday, gave an hour+ talk on “Top 11 Trends for Mobile Enterprise,” did press interviews and a panel with journalists, and challenged and was challenged by industry analysts on the topic. So I had a lot to say about mobile. But more than whatever I said, I learned an incredible amount of what our customers and partners are doing with mobile today. We also discussed how IBM’s new mobile products, IBM Worklight 5.0 and the IBM Mobile Foundation, could be essential to them over the next few years.
Here’s a bit of what I learned.
System integrators are looking to pick the one or two best mobile platforms on which to focus their efforts. The hybrid mobile app development model in IBM Worklight is very appealing because of its open standards and technology approach, and because it allows the creation of everything from pure native apps to those that are mostly HTML5 content.
Security and app management are critically important. Both IBM Worklight and Endpoint Manager for Mobile Devices, included in the IBM Mobile Foundation, have capabilities that address this. In some organizations, the BYOD, or Bring Your Own Device, movement is accelerating their concerns but also their need to react quickly. My suggestion is to consider security and device management as extensions of what you already do for your website, web applications, and hardware like laptops and servers. Don’t think of mobile as this odd new thing, consider it as adding on to what you do already.
Partners have started building mobile apps on Worklight, often without any initial guidance from IBM. This is wonderful. It reaffirms what we knew when we acquired the company earlier this year: Worklight is an elegant product that you can use to create mobile apps for multiple device types, connecting them securely to your backend infrastructure.
Mobile apps are not islands. That is, don’t think of a mobile app server as something that sits in the corner by itself while the rest of your infrastructure is elsewhere. We included IBM WebSphere Cast Iron in the IBM Mobile Foundation because we knew that customers and clients needed to have apps talk to enterprise applications like SAP but also services that run on clouds.
Infrastructure support for a mobile app could be very little or might need to be very large. IBM Worklight 5.0 will ship the Liberty Profile of WebSphere Application Server in the box. So you get small and fast. If you have an existing WebSphere Application Server ND deployment, you can put IBM Worklight right on top of that. This includes WebSphere running on System z mainframes using Enterprise Linux.
Mobile can extend your business. If you have a web presence for retail, mobile can extend that. If you are a bank and have ATMs, mobile can extend some of those functions to mobile devices. If you have automotive repair shops, mobile can increase customer trust and loyalty.
Mobile can transform your business. Your first mobile apps will enable some core functionality, but later apps and versions may bring in social, analytics, commerce, and industry-specific elements. Don’t think of just an air travel app, think of one that helps me use my time in airports productively and eat healthily.
So to sum it up: mobile is surging for good reasons, customers and partners are asking the right questions, IBM Worklight is appealing to them as a platform on which to build multiple mobile apps, we think the IBM Mobile Foundation is a solid base on which to construct your mobile enterprise, and I’m looking forward to showcasing the many, many mobile apps created by and for our customers and partners at Impact 2013.
Before I had my current job involving the IBM mobile platform and product management for the WebSphere Application Server, I worked on Linux and open source. In March of 2011, I gave a talk at POSSCON called “Landmines for Open Source in the Mobile Space.” I took a look at this again and thought a lot of it was still relevant.
You can see a video of the talk and get a link to the presentation here. What do you think still holds? What is out of date?
I haven’t posted the stats for browser and operating system access to this website since last July, but since I’ve been doing a lot of posting lately on mobile topics, I thought it would be useful to check the stats again. The numbers are from Google Analytics and are for the last six weeks of traffic.
Browsers and Operating Systems

| Position | Browser / Operating System | Percentage |
| --- | --- | --- |
| 1. | Firefox / Windows | 26.72% |
| 2. | Chrome / Windows | 19.19% |
| 3. | Internet Explorer / Windows | 13.42% |
| 4. | Chrome / Macintosh | 11.20% |
| 5. | Firefox / Linux | 5.79% |
Over the last 15 years of my career, I’ve seen several ideas or technology trends capture a significant amount of customer, press, and analyst attention. There was Java, XML, web services, SOA, and cloud. In and around all those were standards and open source. To me, the unquestionably hot technology today is mobile.
To be clear, I’m not talking about what happens in cell phone towers or the so called machine-to-machine communication. I mean smartphones and tablets. Those other areas are important as well, but devices are so front of mind because so many people have them.
Apple is obviously playing a big role with its iPhone and iPad, not to mention the half million apps in its App Store. Google and the Android ecosystem have produced even more smartphones and a whole lot of apps as well. Then there’s been the drama around HP and webOS, plus RIM and the PlayBook and outages. So we’ve got competition, winners and losers, closed ecosystems, and sometimes open ones. What’s not to love about mobile?
It can get confusing, especially for people trying to figure out their enterprise mobile strategy. They are looking for strong statements, for “points of view,” that will help them take advantage of mobile quickly but also aid them in avoiding the biggest risks. This is made even more interesting by employees bringing their own devices to work, the “BYOD” movement.
Not every employee is issued an official company smartphone, and the devices they buy themselves are often better than what the company might provide. So they are saying “I’ll pay for my phone and my contract, let me have access to work systems so I can do my job better.” The recent ComputerWorld article “IBM opens up smartphone, tablet support for its workers” discusses some of what’s happening in this space at IBM, my employer.
Next there is the whole web vs. hybrid vs. native discussion regarding how to build apps on the device itself. Should you write it to the core SDK on the device (native), stick to developing standards for continuity and interoperability reasons (web), or something in between (hybrid)? Which is faster and for what kinds of apps? Does the app cause a lot of network traffic or does it require great graphics? Are you willing to bet that HTML5 will get better and better? I’ve started discussing this in a series of blog entries called “Mobile app development: Native vs. hybrid vs. HTML5” (part 1 and part 2). Your choice will involve tradeoffs among expense, time to market, reuse of web skills, portability, and maintainability.
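Those tradeoffs can be sketched as a toy decision helper. This is only an illustration of the criteria mentioned above — the rules and the function itself are my assumptions, not a formal methodology or anyone’s product guidance:

```python
# Toy sketch encoding the native/web/hybrid tradeoffs discussed above.
# The decision rules are illustrative assumptions, not a real framework.
def suggest_approach(needs_rich_graphics, reuses_web_skills, needs_device_apis):
    """Return 'native', 'web', or 'hybrid' based on coarse criteria."""
    if needs_rich_graphics:
        # Graphics-heavy apps tend to favor the device's core SDK.
        return "native"
    if needs_device_apis and reuses_web_skills:
        # Web skills plus access to device features: the in-between option.
        return "hybrid"
    if reuses_web_skills:
        # Standards-based development for continuity and portability.
        return "web"
    return "native"

print(suggest_approach(False, True, True))  # hybrid
```

In practice the decision is rarely this clean — network traffic patterns, maintenance costs, and your bet on HTML5’s trajectory all weigh in — but a table like this is a reasonable starting point for the conversation.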
What about management? If I bring my own device to work, how do the company’s apps get onto it in the first place and then get updated? Is there an enterprise app store? If I leave the company, do they zap my whole phone or just the apps they put on it? There are differences between Mobile Application Management (MAM?) and Mobile Device Management (MDM) that you need to understand.
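The MAM-versus-MDM distinction can be made concrete with a toy model. This is purely illustrative — the data structures and function names are my invention, not any real management API:

```python
# Toy illustration of the MDM vs. MAM distinction (not a real API).
def mdm_full_wipe(device):
    """Mobile Device Management style: the whole device is reset,
    personal data included."""
    device["apps"] = []
    device["personal_data"] = None
    return device

def mam_selective_wipe(device, company_apps):
    """Mobile Application Management style: remove only the apps the
    company installed, leaving personal apps and data alone."""
    device["apps"] = [a for a in device["apps"] if a not in company_apps]
    return device

phone = {"apps": ["mail", "expenses", "camera"], "personal_data": "photos"}
mam_selective_wipe(phone, {"mail", "expenses"})
print(phone["apps"])           # ['camera']
print(phone["personal_data"])  # photos -- personal data survives under MAM
```

For a BYOD employee the difference matters a great deal: a selective wipe on departure removes the enterprise app store’s payload but leaves the family photos intact.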
Let’s not forget security, as if we could. A colleague of mine, Nataraj Nagaratnam, CTO of IBM Security Systems, told me the way to start thinking about that for mobile is that “a secure device is a managed device.” That doesn’t mean that all security falls under management, but rather that you need device management to have a complete mobile security strategy. You also need to handle identity management, authorization and authentication, single sign-on across apps, data loss protection, and all the things you need to worry about with the web today such as phishing, viruses, worms, social networking, VPN, etc. Security must be there, but it also needs to be unobtrusive. Most mobile users will not know what a certificate is, nor whether they should accept one.
A fundamental difference in managing and securing mobile devices, as compared to laptops, is that people tend to lose their phones a lot more often than they lose their laptops. That’s a good starting point for thinking about the differences.
The Mobile Technology Preview encapsulates several technologies we’ve been working on in the labs. We’re making it available for you to experiment with it, comment on it, share your requirements for your mobile platform, discuss the pros and cons of different approaches to mobile app development on both the device and server side, and join the community to make it better.
We plan to update the Technology Preview as we add or change the feature set, ideally because of your stated requirements. In this release we’ve included:
- an application server runtime that uses the WebSphere Liberty Profile of the WebSphere Application Server 8.5 Alpha (runs on Linux, Mac, and Windows)
- a notification framework
- basic management functions
- location-based security
- several samples featuring notifications, Dojo, PhoneGap, and a starter insurance app for handling car accidents.
The Mobile Technology Preview is available for Android devices.
I plan to use the tech preview from time to time to illustrate some of my discussions of mobile in my blog. I encourage you to try it out, track its progress, and influence its roadmap.
It’s been a while since I last put up some stats about what browsers and operating systems access my website at sutor.com. Traditionally, Firefox did well, followed by Internet Explorer, and then Chrome. The last two are now reversed.
Since much of my blog content has focused on open standards and open source, it makes sense for Firefox to have consistently led. Here’s the statistical story for the last month, thanks to Google Analytics. I’ve focused on the top 5 in each category.
Browsers and Operating Systems
| Position | Browser / Operating System | Percentage |
|---|---|---|
| 1. | Firefox / Windows | 26.17% |
| 2. | Internet Explorer / Windows | 14.89% |
| 3. | Chrome / Windows | 12.57% |
| 4. | Chrome / Macintosh | 9.28% |
| 5. | Firefox / Linux | 8.07% |
It’s been effective for a week, so I guess I can spill the beans here and say that I’ve shifted to a new executive position within IBM, namely to be the Vice President for WebSphere Foundation Product Management in the Software Group. I’ll have more to say about this over time, but basically it means that my team works with development, sales, and marketing to drive the WebSphere Application Server line and products like WebSphere eXtreme Scale. These are significant unto themselves but also underlie some of the most important software products that IBM sells. That’s not a totally inclusive list, but you get the idea.
Obviously we’re not just concerned about what we have already but also will be driving the plans for new products and the next generation of current ones that fit within that “foundation” area of the stack of IT software. Stay tuned.
Some of you might ask “didn’t you sort of do something similar about 6 or 7 years ago?”. Yes and no, sort of.
When I was last here in 2003-4, the world was just figuring out the commercial benefits of applying XML to business problems and web services was pretty new. There were several open source app servers and Oracle had not yet bought BEA and Sun. We were about to enter into the SOA era that led us to the current cloud era. Also, I had a marketing position, something I had never done before. This role is more of a blend of the business and the technical.
I learned a lot during that time but the IT world has evolved significantly, as have our products. We’re all right on the cusp of doing even more wonderful things with this core technology we as an industry have developed, so it’s a great time to move back and help drive it from the inside.
What does this mean for the blog?
- I will not use it as a marketing vehicle for products, though I may provide links to things I think are of interest.
- I’ll still talk about all those extraneous topics like gardening, sailing, cooking, and not playing the guitar well.
- The discussion of standards will probably increase again.
- I’ll keep talking about Linux and providing links to interesting articles, but more from a user or enterprise consumer perspective.
- The amount I’ve said about open source lately has decreased primarily because I’ve largely exhausted many of the discussion areas that interest me, and I don’t like repeating myself. There will still be some content about open source, but it will be at about the same level it’s been for the last six months.
- I’ll be ramping up the discussion of Java and other languages, programming frameworks, tools, cloud, mobile, runtime considerations, and application integration. Much of this has been present from time to time, but will increase.
As some of you know, I’ve been working for IBM for 28 years, though I was but a child when I started. Evidently it existed before I got here, and the full 100 year history is discussed in a new book called Making the World Work Better: The Ideas That Shaped a Century and a Company by journalists Kevin Maney, Steve Hamm and Jeffrey M. O’Brien. It is now available for preorder and will be out in a week or so.
From the book description:
The lessons for all businesses and institutions are powerful: To survive and succeed over a long period, you have to be willing and able to continually transform, guided by enduring values and a broadly understood identity. Over a century of change, IBM came into being, grew, went global, nearly died, transformed itself… and is now charting a new path forward, embracing a second century that bids to be even more surprising than its first.
By the way, Linux is mentioned; see page 194.
Today a whole lot of companies, including BMC Software, Eucalyptus Systems, HP, IBM, Intel, Red Hat, Inc. and SUSE, announced the creation of the Open Virtualization Alliance.
From the press release:
… today announced the formation of the Open Virtualization Alliance, a consortium committed to fostering the adoption of open virtualization technologies including Kernel-based Virtual Machine (KVM). The consortium will promote examples of customer successes, encourage interoperability and accelerate the expansion of the ecosystem of third party solutions around KVM, providing businesses improved choice, performance and price for virtualization.
The Open Virtualization Alliance will provide education, best practices and technical advice to help businesses understand and evaluate their virtualization options. The consortium complements the existing open source communities managing the development of the KVM hypervisor and associated management capabilities, which are rapidly driving technology innovations for customers virtualizing both Linux and Windows® applications.
KVM virtualization provides compelling performance, scalability and security for today’s applications, smoothing the path from single system deployments to large-scale cloud computing. As a core component in the Linux kernel, KVM leverages hardware virtualization support built into Intel and AMD processors, providing a robust, efficient environment for hosting Linux and Windows virtual machines. KVM naturally leverages the rapid innovation of the Linux kernel (to virtualize both Linux and Windows guests), automatically benefiting from scheduler, memory management, power management, device driver and other features being produced by the thousands of developers in the Linux community.
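Since KVM depends on those hardware virtualization extensions, a first question on any Linux box is whether the CPU exposes them. A minimal sketch, assuming a Linux-style /proc/cpuinfo (Intel advertises VT-x as the `vmx` flag, AMD advertises AMD-V as `svm`):

```python
# Minimal sketch: check for the hardware virtualization CPU flags KVM
# relies on, given the text of a Linux-style /proc/cpuinfo.
def has_hw_virt(cpuinfo_text):
    """Return True if any CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# Hypothetical sample; on a real system you would read /proc/cpuinfo.
sample = "processor : 0\nflags : fpu vme sse2 vmx est\n"
print(has_hw_virt(sample))  # True
```

If the flags are absent (or virtualization is disabled in the BIOS), the kvm kernel modules will not load and guests fall back to far slower software emulation.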