Thursday, July 10, 2008

Microsoft's New Expression Studio 2

Microsoft has released a new version of their Web Tools Suite, namely Microsoft Expression Studio 2.

This suite consists of five separate Web-related tools and is quite an achievement compared to Microsoft's earlier attempts at Web development. It is, of course, not meant to compete with the actual Web development capabilities of Visual Studio: the tools in Microsoft Expression Studio are aimed at Web designers rather than Web software developers.

Expression Web is a Web design tool with an interesting twist. Usually Web design tools are used offline to design a Web site or page, and the files are then transferred to the actual Web server so that they are available through a browser. Expression Web, however, takes a different approach: if you use it on an existing Web site (that you have admin rights for), you can do a sort of reverse engineering. There are two modes for working on a Web site.

The Web site can be edited "live", or it can be edited offline by copying the files to a local directory and publishing back to the server when editing is finished.

In both modes it is possible to see all the components of the Web page or portal and edit those which can be edited directly, such as images, icons or other visual elements. It is also possible to apply style sheets (CSS styles) or add new graphical user interface elements.

Expression Encoder is an advanced video encoding and live broadcasting application that is especially geared to Silverlight projects. Silverlight is Microsoft's alternative to Adobe's Flash for interactive Web content, and it also promotes the use of XAML.

Expression Blend is a design tool that helps designers create user interfaces for Windows applications. It helps separate the design aspects from the development aspects (which are handled via Visual Studio). It can handle XAML, JavaScript and other aspects of modern application development.

Expression Media is a tool that helps you manage your media (photos, audio files, videos, etc.) so that you can use them more easily in your Web projects. It is not superior to tools specifically made for certain types of media (e.g. Picasa for photos or iTunes for audio files), but it is a useful tool if you want to stick with Microsoft.

Expression Design is Microsoft's new drawing tool for designing graphics for Web development. Again, there are much better tools for this, but if you use Microsoft's tools as a suite, it makes sense, since they share the same look-and-feel and can easily exchange data. Although not earth-shattering, Expression Studio is yet another set of tools in the ever-growing software arsenal of Microsoft.

Wednesday, July 09, 2008

An Old Developer Tries His Hand at New Stuff....

After developing software for around 25 years, I have spent the last few years mostly in a management role and have not written a line of code for maybe the last three years. So I started to feel that my software development skills were getting rusty and that I was missing the boat with the latest technologies.

Suddenly realizing that this September will mark my 30th year in software development, I decided to try my hand at, and report on, some of the new technologies that developers use now, and also to revive one of my pet projects from the past.

Although I have been working with very able developers lately and I am familiar with most of the new development environments, my prime development era ended when we were still using Microsoft Visual Studio 6. I did some minor development when our guys were working with Visual Studio .NET and then VS .NET 2003. When we went into VS .NET 2005, I had stopped writing any code.

Of course I had to start this new project with VS .NET 2008, and furthermore use the Team Suite, as we are all transitioning to the Team Foundation Server to do Team Development more efficiently (or so claims Microsoft). It is also worth mentioning that the Team Suite would work when there is no Team Foundation Server, but it would then be more or less identical to the Visual Studio Professional version.

I installed the development environment on my Dell XPS M1730, which is a 2.6 GHz dual-core laptop with 3.5 GBytes of memory and running Vista Enterprise. I can tell you that you need a fast machine and a lot of memory to benefit from the new functionality in VS 2008.

When starting the VS 2008 Installer, you get a main screen showing three options: Install the software, install documentation and check for revisions. The last option actually runs Windows Update to check for any updates to Visual Studio (or other Microsoft software, for that matter). The second option is to install the digital version of the documentation (you do not get physical documentation nowadays).

The first option runs the Visual Studio Installer. The first thing the installer does is to check whether you have all the pre-requisite software already installed. These include the .NET framework and some other components, depending on what software you had installed before.

You have a couple of options that range from a totally automated, default installation to an installation that can be totally configured. You can select which languages or tools to include in the installation. It is advisable to skip those components that you do not plan to use, since the VS installation can take a lot of space.

My installation went uneventfully and I was able to start the Visual Studio 2008 Team Suite system. Onward ho!

Sunday, July 06, 2008

The Year at Harvard and Running Programs in Parallel (1991-1992)

I was now a regular part of the faculty and was concentrating on my research. I was still teaching my course on Programming, and was thinking of introducing object-oriented concepts to replace the structured ones. Since I had become a professor at my own university, there was pressure to visit an external university to gain the experience I had missed by not doing my graduate work at a different institution.

My background was in simulation, so I started feeling around for a postgraduate research post involving simulation. Fortunately, my PhD advisor was a graduate of Harvard University, and he contacted his own Ph.D. advisor, who told him that he needed somebody to work in simulation, in a new area called the Standard Clock method. After a couple of e-mail messages to understand what the task was about, I got a letter of acceptance from Harvard to work for one year as a Postgraduate Researcher at the School of Applied Sciences.

The research team I was joining was run by Professor Yu-Chi Ho, a very prominent Asian-American researcher who had once co-authored a paper with the famous Prof. Kalman and was one of the best-known researchers in the area of Discrete-Event Dynamic Systems. The team consisted of several Chinese doctoral students, an Indian, a Greek, one American and myself.

Prof. Ho wanted me to work in the area of Standard Clock simulation. This was a new technique that tried to use the properties of certain simulation problems to exploit the power of Massively Parallel Computers. A typical simulation problem consists of running several variations of a simulation model to find out how different parameters influence the outcome. Since simulation models are not analytical, there is no easy theoretical solution. Results are always statistical and can only be obtained by running many simulations.

Harvard had a massively parallel computer built by the company Maspar, a newly established firm specializing in massively parallel computers using the Single-Instruction Multiple-Data (SIMD) paradigm. The first such computer, named the MP-1, was shipped in 1990, and Harvard was one of the first users. The MP-1 had 1024 processors, plus a central unit (which was usually another computer, acting as the front end). The SIMD paradigm was based on data-level parallelism: the computer ran the same instruction on all processors at the same time, but each processor worked on its own data item. The machine was programmed in a C-like language called MPL (Maspar Programming Language). The difference was in the data structures, which could be declared plural, turning a single variable into a vector with one element per processor. If you added two plural variables with the + operator, all 1024 separate data elements belonging to the 1024 processors were added in one instruction.

int plural num

would declare an integer variable called num which would always have the same value in all processors.

plural float plural x

would define a floating-point variable called x which could have different values on the different processors (hence the double plural keyword).

There was no way for the different processors to communicate directly with each other, but the central unit could connect to and pass data to the individual processors.

Memory allocation was also a problem, since any allocation would be mirrored on all processors. If different simulations required different amounts of memory, this had to be managed somehow. I solved the problem by managing the memory myself centrally: pre-allocating memory for all processors even if they did not need it yet, and handing memory to a processor when needed.

The algorithm I was using exploited the property of some probability distributions that they can be scaled linearly, in order to run different variants of the same simulation model, each differing only slightly in one of the parameters. Events could then be ignored in some processors and accepted in others, giving 1024 parallel simulations with different outcomes.

I was connecting remotely to the Maspar front-end (a Sun workstation) from my own Sun workstation. This was pre-Solaris. I was getting more and more used to Unix, having previously used the HP variant called HP-UX. The workstation was also used for e-mail and searches. At that time, the popular tool was something called Gopher, a menu-driven, text-based document retrieval system that did not really look like the simple search tools we have today in our browsers. The World-Wide-Web was in its infancy, the first popular browser (NCSA Mosaic) would not appear for another two years, and the establishment of Google was still 7 years in the future.

I had not really used C before I started using Maspar C. In a sense that was good, since I would go directly to C++ afterwards and would be immune to the deficiencies of the C language. Maspar C was of course different, since it was geared to work with parallel data structures. I was able to implement the Standard Clock algorithm and run many thousands of simulations on the Maspar MP-1. I also found several bugs and helped the Maspar team fix them.

I also had a chance to run the software on an MP-1 with 4096 processors. Execution times plateaued after a few processors, so the running time stayed essentially constant regardless of the number of simulation variants run in parallel.

So this year at Harvard, apart from its contribution to my academic career, gave me my initial experience with the Internet, the World-Wide-Web and other features that are part of today's computing environment. After a year in Cambridge, one of the most European cities in the U.S., I went back to my university and to teaching.

Sunday, November 25, 2007

Object-Oriented Programming and MacApp (1987-1991)

When we decided to program an Accounting Package on the Macintosh, we started to look into the development tools that Apple offered on the Mac. The main development activity in the company was still going on with Turbo Pascal on PCs and many new modules were being developed. Seeing that the development group was doing fine, I brought together a small group to work on the Mac platform.

A new paradigm in software development had started those days and was named Object-Oriented Programming (OOP).

Think Pascal was one of the good development environments you could use on the Mac. If you wanted to be on the safe side, you could use Apple's Macintosh Programming Workshop (MPW), since it would be adapted quickly to changes in the Macintosh OS.

Think Pascal was more user-friendly, as compared to the command-line-based MPW, and it was also much cheaper, since MPW targeted professional, rather than hobbyist, users.

Both Pascal dialects were also object-oriented, along the lines of Apple's Object Pascal.

Think also had a C compiler called Think C, which was essentially a precursor of C++, offering basic object-orientation concepts such as (single) inheritance.

Think Technologies was later bought by Symantec and the products were renamed Symantec Pascal and Symantec C. Symantec could not adapt to the Mac's transition from the Motorola 68000 family of processors to the PowerPC chip developed by Apple, IBM and Motorola. It lost its market share to its competitor Metrowerks and their development environment, CodeWarrior.

Before all this happened, we had started to look into MPW Pascal and were trying to find out more about object-oriented programming. None of us had worked with the original object-oriented language, Smalltalk. After developing for many years using Structured Programming, this idea of objects and of sending messages to objects was quite new to all of us. By introducing constructs called classes into the code, we were re-defining the flow of control in the software, distributing responsibility among these classes. Basically, a class was a programming construct that kept both data structures and the logic dealing with those data structures in the same entity. When the programming logic was confined to an entity like this, it was usually easier to debug. Objects were instances of a class, all sharing the same features (data and logic) defined by the class.

The logic or functionality embedded in a class was exposed, to the outside world or within the class itself, through methods. A method was essentially a subroutine belonging to an object of a certain class, invoked through a message. The message could carry parameters (just like a subroutine call) but could only be sent to objects belonging to the class which defined the method.

If object-orientation had been limited to a redistribution of responsibilities, it would not have been so interesting. The interesting part came when you used the concepts of inheritance and re-use. It was possible to take a class and create a variant of it that used most of its code but modified it to display different behavior where required. This variant was called a sub-class and was a way to minimize total code development by re-using the code for shared behavior.

Another concept that was difficult to understand was polymorphism: the same method can do different things in objects that are related but belong to different classes. So, if you sent the same message to different objects, the resulting behavior could differ.

After understanding the basic principles of OOP, I noticed the difficulty of using this in the current visual environment of the Macintosh. Most of the objects you would have to deal with in a visual environment were Graphical User Interface (GUI) objects and these had to have a close association with the internal Operating System functions. Most of the OOP books used classes which were quite abstract and generic, but it was difficult to apply this in a GUI like the Mac OS GUI. Here, the concept of Application Frameworks came into the picture. Basically, Application Frameworks gave you the necessary classes to deal with the GUI or other aspects of the Operating System such as the File System, peripherals and so on.

Apple had created an Application Framework called MacApp that helped developers deal with the complexities of GUI-based development. It really made life easier, since you just had to pass the right messages to the right objects and GUI actions or OS actions just followed.

To get a good hold of MacApp, a colleague and I went on a one-week MacApp course at the European headquarters of Apple Computer in Paris. The instructor (a Frenchman of Polish origin) was really good on OOP concepts and MacApp, and included a lot of hands-on work. I was pretty impressed by Apple's offices, which were very modern and highly automated.

Everything in MacApp was derived from a top-level class called TObject. Since you could use events (called Apple Events) to communicate between applications and from an application to the OS, an important class you used frequently was TEvent. Everything that represented a command to be executed would be encapsulated in a class derived from TCommand. Documents would be represented by sub-classes of the TDocument class. Every GUI or Operating System element would be derived from one of the top classes.

I did my Mac development using a Mac Classic, later switching to a Mac SE. However, it soon became clear that there was not much of a local market for Mac accounting software, since that market was already dominated by several PC accounting packages and the Mac was in decline.

During our brief adventure with the Mac platform, I had decided that I did not really enjoy developing accounting and similar financial software, and I made up my mind to leave the company and continue my academic career. This move also signalled my return to the Mac development environment, since the academic environment gave me some freedom to work on any platform I wanted, although most of the university network consisted of PCs.

I went back to the university as an assistant and completed my PhD thesis (still working on the PC, of course, and with Turbo Pascal). In the meantime I had become the de-facto manager of the department labs, which had some PCs, a few Unix workstations and a few Macs.

After the thesis dissertation, I was offered an Assistant Professorship. Once I joined the faculty, I asked for a Macintosh for my personal use. If I'm not mistaken, I got a Macintosh IIci. In a few years I would upgrade and get a Macintosh Centris 650, a "very fast" computer with a 25 MHz processor!


Most people were using PCs or Macs, and they were also using these personal computers to connect to the university mainframe or to various Unix workstations in the different departments. The IT infrastructure of the university in those days can best be described as "chaotic": every department managed its own subnetwork in its labs, and the best they could hope to provide was some entry points to their networks, plus Bitnet mail support (for those of you who are not familiar with Bitnet, it was a precursor of the academic Internet; it existed from 1981 to 1996 and then faded into obscurity).

One of the courses I taught was Principles of Programming, where I had to teach Pascal. I started to develop detailed course notes, especially emphasizing the optimal use of data structures, and relied on course projects to give students practical insight into programming. Since most of the students had a PC or used a PC lab, I had to base my lectures on the PC. However, this did not prevent me from doing my own research on Macs.

I had continued to get more and more familiar with the MacApp framework. In the meantime, Apple had released the 3.0 version of MacApp implemented in C++. This was a good opportunity to get into C++. I had had very little exposure to C, working with object-oriented Pascal most of the time. In a sense this was good, since I would not bring any "bad habits" that would be typical of C programmers. I also enjoyed the pure object-orientation and strong typing of C++, since these were rather weak in Object Pascal implementations. MacApp would be my main development environment in the near future, but some of my career moves would bring some surprises to my software development adventure.

(Microsoft later incorporated many of the MacApp concepts into its Microsoft Foundation Classes library, and application frameworks became a regular feature of software development environments. Coincidentally, Microsoft's classes would use a C prefix in their names, similar to the T prefix used by MacApp classes.)

Sunday, October 22, 2006

Artificial Intelligence, Or The Lack of It (1984-1987)

When I was at the university doing my M.S., I started to look into Artificial Intelligence (AI) seriously. Although it was not really part of my major, the topic itself was so interesting that I wanted to understand it better. Since I was a fairly experienced chess player and had briefly played for the university team, I looked into chess programs.

Chess was one of the early targets the 1960s Artificial Intelligence community looked into. Early AI researchers thought that they would have a World Champion chess program within a decade or so. This was only partially accomplished when Deep Blue beat Kasparov in 1997; moreover, the algorithms and approach used in Deep Blue and similar successful programs hardly looked like what the AI researchers had envisioned. Most of them use a game tree and various strategies to optimize the evaluation of alternatives in that tree. After reading a couple of computer chess books, I gave up on this.

Other forays into the AI field were things like "Eliza", a program simulating a human therapist, in essence an attempt at the Turing Test. The early optimism of AI researchers soon dissipated, and they started looking into more realistic goals such as Knowledge-Based Systems (which rely on a knowledge base, i.e. a set of rules, rather than on sophisticated algorithms imitating human thinking).

In the same time period, I was attending informal weekly meetings organized by a publishing house, covering a variety of topics in literature, culture and other things. I met a fellow who had studied philosophy at Berkeley but had not finished his Ph.D. Part of the legacy he brought from Berkeley was a lot of information about the Berkeley professor and philosopher Hubert Dreyfus, widely regarded as one of the foremost critics of Artificial Intelligence. I had the chance to read his book What Computers Can't Do: A Critique of Artificial Reason, which basically argues that a computer cannot duplicate human behavior perfectly, because humans have bodies, are immersed in the physical world and have physical experience in it, unlike computers, which do not experience the world in the physical sense. It opened my eyes to the limits of artificial systems and to what we can expect from Information Technology in the long term.

I took a course on Logic Programming and learned the Prolog language. This language is based on predicate logic and represents "facts" (atomic statements) and "rules" in a syntax similar to predicate logic. After writing a couple of fun programs, such as a medical diagnostic program that used probabilities and rules about medical symptoms, I did not find much use for this specific language and went back to more traditional languages.

Tuesday, July 11, 2006

Curse of the Data Structures (1986-1987)

Once it was clear that we would have to develop financial software if we wanted to get into the market, the way ahead was more or less determined. I had taken accounting lessons as part of my education and had quite a good grasp of accounting principles.

At that time, most companies developing accounting software used really archaic development environments left over from mainframes or minicomputers. They were also lacking in the user interface department, since most of their interfaces were line-based and could handle one transaction line at a time. Having worked with complex data structures and good structured-programming principles, I was interested in utilizing these in real-life programs. Using Pascal - a modern programming language in those days - had its advantages.

One challenge we had at the time was representing the account codes in a proper, hierarchical data structure. Because of the way accounting systems were set up, we had to have a tree-based structure, and obviously not a binary tree: binary trees were easy to handle, but inserting new elements and re-balancing the tree took a lot of time.

Each main account could have an unlimited number of sub-accounts, and we had to access these really quickly in memory to prepare ledgers and similar accounting documents in a relatively short time. At that time we were not considering ready-made databases, first because they were quite expensive and we could not afford the license fees, and also because most of them did not perform very well anyway. These limitations quickly led us to n-ary trees in memory. While searching for a good way to represent these, we were introduced to the concept of B-trees. These are tree structures in which the upper-level nodes are index pages, each storing keys in a certain range together with links to pages at the next level down. In a sense, the top-level pages correspond to the "Table of Contents" of a book, giving fast access to the lower-level pages, where the actual keys and the corresponding data records are stored. One problem with this structure was that it had to be re-balanced from time to time, which could take an unpredictable amount of time.

Since we did not have access to a database, let alone a relational database with SQL support, we had to work with flat files. In addition, we used linked lists in memory, handling the relations by keeping record pointers between consecutive records belonging to the same list and saving each record in a sequential-access file. The "curse" of this data structure was that if something happened while saving the data to these flat files (such as a power cut, which happened regularly in those days), the data would invariably be corrupted, since links would end up pointing to invalid entries. The same could happen through programming errors.

We released the Accounting Module and suddenly encountered constantly increasing demand. The relatively easy user interface that Turbo Pascal enabled, and the fast operation due to clever in-memory data structures, differentiated the product from its competitors. However, the flat-file structure was causing frequent corruption of the data, and we had to bring in customers' files and fix them manually. In time we developed automatic tools which made the task of fixing corrupted files a bit easier.

While transforming ourselves to a provider of accounting systems, we felt the need to grow. Up to that time, all the programming was done by ourselves. We got a secretary, then a support person, then an office boy, and right afterwards two young programmers.

In the meantime a major computer seller had approached us, asking us to develop a special version of the Accounting Module exclusively for them. They also asked us to add a couple of other modules, namely Inventory Control, Invoicing, Order Processing, etc. They would provide the financial expertise, assigning one of their experts to us for requirements capture, but they wanted this product to be bundled with their computers. It was an attractive proposal for us, a fledgling young company, and we signed a contract without thinking too much about the future implications. Since we had deadlines with monetary penalties, we started working hard on this special program. We would find out later that the contract implied quite restrictive terms for all of our software development.

We produced the software and started testing, supported by the financial experts of the ordering company. While we were supporting their own developers, who would eventually take over the software, we were also building a new software development group for our future products. We built similar modules with information from internal sources or other experts, and prepared to market the new Integrated Suite of Financial Applications.

We were very surprised when we received a formal warning through a notary, asking us to stop all software development in the area of financial systems. The warning was drafted by a very well-known corporate lawyer whose name frequently appeared in the press. When we contacted them, they said that they just wanted to protect their investment. Although we agreed that we had signed a contract to produce the software for them, we did not intend to stop all software development in this area, since it was the most sought-after area of software.

Through a friend we got ourselves a good corporate lawyer, one who contributed a lot to public-interest cases and took on the task of opposing big corporations when they tried to bully smaller rivals or individuals; he was a man of principle. He read the contract and told us that, although it could be interpreted as covering all our software development, he thought such a restrictive contract would not normally be allowed under contract law. It was clear that we had been misled into signing it because of our lack of understanding of contract law. He sent a counter-warning and the battle was on.

After many months of warnings and counter-warnings, they figured out that even if they obtained an injunction forcing us to stop all software development in this area, it would be very difficult to enforce; furthermore, they needed us to support their exclusive software effort. In the end we revised the contract to provide more support, and they accepted the removal of the restrictive clauses.

It was now possible for us to increase the scope of our activities, especially since the contract had resulted in a much-needed cash influx to the company. Building on the success of our Accounting Module, we proceeded to develop the Integrated Financial System, with Inventory Control, Order Processing, Invoicing, Checks and Notes Management and eventually Materials Management (a precursor of today's Enterprise Resource Planning systems). We established an official Software Development Department (with me as Director) and populated it with several young software developers. We also had a production department which constantly produced the packages, writing and packing the floppies one by one (floppies were 5 1/4" and 3 1/2" in those days).

One challenge was to convince major PC sellers to bundle this software with their PCs. It was well known that companies saw software as part of the computer and did not want to spend extra money on it. It was therefore an advantage to include the software in the price of the computer (with some discount) and show it as part of the complete system. However, most of these PC importers had agreements with other developers. It took us several years to convince them - one by one - that we had a higher-quality product, a better user interface and better customer service; eventually they started selling our software, sometimes exclusively, sometimes as an alternative to other companies' financial software.

Another challenge was to provide training and good user documentation. This would eventually grow into an industry, with companies specializing in providing training in these tools, but in the beginning we had to do all of this ourselves. The company was growing at an uncontrollable pace: the office now occupied two full stories of a building, whereas we had started in a single room of maybe 5x4 meters.

Once sales of the Integrated Package started growing, we could also go back to one of our earlier goals: we produced a Macintosh version of the Accounting Package, although we sold very few copies of it.

But working once more on the Macintosh would have an unexpected effect on my software development skills: I was introduced to a totally new concept in programming, namely that of Object-Oriented Programming.

Sunday, April 23, 2006

IBM PC & Pascal-Early Years (1984-1986)

After a brief trial with Apple computers, we had changed our direction and instead decided to develop software for the IBM PC & compatibles market. (At that time PCs were known as IBM PCs or IBM-compatibles; the distinction would disappear after a couple of years, probably due to the hundreds of PC producers and the slow demise of IBM's PC division.)

We had a friend still studying at the university we had graduated from, who had been brought in to head a PC importing company his father had started. We talked to him, and he was generous enough to offer us a PC at a good price, with flexible payment terms. It was a so-called "no-name" PC, assembled from cheap Taiwanese components. We even assembled it ourselves, but we must have done a sloppy job, since a heavy jolt to the table the PC stood on would make it reboot.

The IBM PC and its compatibles used the Intel 8088 processor. Microsoft created DOS by acquiring an early version (86-DOS, from Seattle Computer Products) and licensed it to IBM, which shipped it as PC-DOS. Microsoft then marketed its own variant, MS-DOS, to the makers of IBM-compatible PCs. Later on, Digital Research also produced a version of the operating system called DR-DOS.



Going back to DOS from the Lisa or Macintosh operating system was a step backwards. DOS offered a completely textual user interface, so at that point you could only use the keyboard, not the mouse.

I used my previous experience from the university to develop the basics of a Project Planning system using the Critical Path Method, which we would try to sell. It was a simple tool, but it did the job. I had written some clever algorithms to solve the CPM problem rather quickly on these early PCs, and the user interface was not bad (it was obviously textual; IBM PCs would have to wait until Windows 3.1 for something remotely resembling the Macintosh user interface). It was developed with Turbo Pascal, the newly emerging Pascal compiler for the PC, marketed by Borland, the company founded by the colorful Philippe Kahn. Borland also produced tools like SideKick, a terminate-and-stay-resident (TSR) program - at that time the only way to keep more than one program loaded under DOS.
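The forward-pass/backward-pass idea at the heart of CPM is simple enough to sketch today. This is an illustrative Python sketch of the method (not the original Turbo Pascal code), assuming positive task durations and no cyclic dependencies:

```python
def critical_path(tasks):
    """CPM sketch. tasks maps a task name -> (duration, [predecessor names]).
    Returns (project duration, set of critical task names)."""
    earliest = {}                      # earliest finish time per task
    def finish(t):
        if t not in earliest:
            dur, preds = tasks[t]
            earliest[t] = dur + max((finish(p) for p in preds), default=0)
        return earliest[t]
    project_end = max(finish(t) for t in tasks)
    # Backward pass: latest allowed finish per task, visiting successors first.
    # Sorting by earliest finish (descending) is a valid reverse order here
    # because durations are positive.
    latest = {t: project_end for t in tasks}
    for t in sorted(tasks, key=earliest.get, reverse=True):
        dur, preds = tasks[t]
        for p in preds:
            latest[p] = min(latest[p], latest[t] - dur)
    # A task is critical when it has zero slack.
    return project_end, {t for t in tasks if latest[t] == earliest[t]}
```

For a small network where A (3 days) precedes B (2) and C (4), both of which precede D (1), the sketch reports a duration of 8 with A, C and D on the critical path.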



Turbo Pascal had a good interface and also included a nice debugger that let you set breakpoints and step through the code.

We tried to market the software to construction companies, who could use it for their construction plans instead of paying a lot of money for imported planning software. This was obviously before Microsoft Project was released. Some of the larger construction companies involved in overseas projects in places like Libya and Saudi Arabia were using very professional products like Artemis and Primavera (I was surprised to see that these products are still available today). These were obviously very expensive, so our target was the smaller construction companies who could not afford them and might prefer a local product, localized in the language and customized to local needs. However, although these companies were interested in the software and impressed by its simplicity, they kept asking whether we had accounting software. Computer use in the country was just flourishing, and these companies, having a limited automation budget, preferred to automate their accounting first - potentially saving a lot of money by controlling their expenses and reducing what they paid to accounting firms. We heard this comment over and over from many potential customers.

We had to find a better strategy. Now that our initial attempt to market Project Planning software had failed, we analysed the situation: we would have to go into the Accounting and Financial Software market, although none of us really enjoyed working in that area. I had taken a course in Accounting and was familiar with the requirements, but it still did not seem like a cool area to work in.

In the meantime we built some custom software for a medical laboratory. It was basically a patient management system: it recorded patients, kept a list of the standard tests that could be applied, and let the user select the tests for each patient. It calculated and printed the bill, printed the test results when they were available, and had some limited capability for printing graphical results for certain tests. Developed over a period of more than one year and extensively tested both offline and in action, it brought us some well-deserved income and let us continue our quest to become an important software company despite the initial setbacks.

This was only a temporary solution, and we felt we had to quickly go into the Financial Software market.

Lisa & Macintosh (1984)


As I was working for this Apple distributor, I started using the newest computers Apple came out with - more and more variants of the Apple II and Apple III. But I was not ready for the shock of my life when I encountered the Lisa.

First of all, this computer used a device called the mouse, with which you could draw graphics on the screen and also navigate the controls. It had 5.25-inch floppy drives and a 5 MByte ProFile hard disk (the same one used on the Apple III).

The second thing I noticed was that when you pressed the On/Off button, it did not immediately shut down; it first saved its settings and open files before turning off.

This was really a revelation and I started to go into the bowels of the Lisa to be able to support it properly. However, in a couple of months I got to see the first Macintosh, which we were going to display in an industry show, and suddenly the Lisa was forgotten. It would die a slow death, due to its high price and the emergence of the Macintosh as Apple's primary platform.

I think I must have used the original Mac, since it was the middle of 1984 when I had the opportunity. After a couple of months with the company, I was not very happy with my manager, who wanted to keep the machines under wraps and not let anybody touch them. Support was going to be difficult without the opportunity to play with the machines extensively.

I and a couple of friends quit the company and founded a firm of our own. We were going to develop software for the Macintosh and be rich and famous. One of our friends bought a Macintosh with money loaned by his father and we went into business.

We started going after our old firm's customers and tried to develop custom software for them, but it was a struggle. We rented a one-room office in a business center known for its shops selling computers, electronic parts and other items. Our old firm was not exactly supportive, since they were angry at us for quitting as a group. After a year of struggle, we gave up, bought a no-name PC and went into the PC software market.

I would, however, continue using a Mac for the rest of my life, even though our first attempt to enter the Mac market failed miserably.

In the meantime most of our friends quit, and we were left with three partners who still believed in the feasibility of this software venture.

Friday, April 14, 2006

Apple II, Apple III and Pascal (1983-1984)


When I saw the Apple II, I instantly liked it. Now, about 23 years later, I think it was more likely the Apple II Plus, since it already had floppy disk drives (5.25-inch ones) and came loaded with Applesoft Basic and UCSD Pascal.
I had started using UCSD Pascal on the CDC Cyber at the university and it did not hurt to use it on this new personal computer as well.

Pascal was another example of a structured language, but it was quite different. The first difference was that you could define complex data structures (records), compared to the plain arrays that Fortran allowed. UCSD Pascal also had the additional feature that it compiled to an intermediate representation called p-code, rather than directly to machine code. This was a precursor, albeit a much simpler one, of the Java Virtual Machine or Microsoft's .NET runtime. It was easier to debug code compiled to p-code than code compiled directly to machine code.
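To illustrate the difference - in Python rather than the Pascal of the day, and with hypothetical field names - where Fortran pushed you toward keeping one array per attribute, a Pascal record grouped the related fields into a single type:

```python
from dataclasses import dataclass

# The Fortran style of the day: "parallel arrays", one per attribute,
# kept in sync only by index.
names    = ["bolt", "nut"]
prices   = [0.10, 0.05]
in_stock = [500, 800]

# A Pascal record grouped these fields into one type; a rough modern
# Python analogue is a dataclass.
@dataclass
class Part:
    name: str
    price: float
    in_stock: int

parts = [Part(n, p, s) for n, p, s in zip(names, prices, in_stock)]
```

With the record style, a "part" travels through the program as one value instead of three loosely related array entries.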



One of my brilliant computer moments was when I first saw VisiCalc, the world's first spreadsheet program. It was an ingenious invention, and it seemed so clever to me at the time that I remember admiring its inventor, Dan Bricklin. It was primitive by today's standards, and the user interface (on a text-based computer) was not very impressive, but being able to write formulas into boxes and see values change based on related cells was really exciting back then.
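The core trick that made it exciting - cells recomputed from the cells they reference - can be sketched in a few lines of Python. This is a toy illustration, not how VisiCalc actually worked internally, and it does not handle circular references:

```python
import re

def evaluate(sheet):
    """Toy spreadsheet recalculation. sheet maps a cell name -> a number
    or a formula string such as '=A1*A2'."""
    values = {}
    def value(cell):
        if cell not in values:
            v = sheet[cell]
            if isinstance(v, str) and v.startswith("="):
                # Substitute each referenced cell's (recursively computed) value.
                expr = re.sub(r"[A-Z]+\d+", lambda m: str(value(m.group())), v[1:])
                v = eval(expr)   # acceptable in a toy; a real sheet parses properly
            values[cell] = v
        return values[cell]
    return {c: value(c) for c in sheet}
```

Given A1=2, A2=3, B1="=A1*A2" and B2="=B1+A1", changing A1 and re-evaluating ripples through B1 and B2 - the behaviour that seemed so magical at the time.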

Soon afterwards I started using Apple's "business computer", the Apple III. This was by far the most advanced personal computer of its time, and although it later turned into a substantial commercial and marketing failure for Apple, for me it was the personal computer most suitable for proper programming. It could hold up to 256K of RAM by bank-switching 64K segments of memory. It even had a hard disk (called ProFile) of 5 MBytes!



I was studying Project Management at the time, and it did not take me long to write a project management program (using the Critical Path Method) on the Apple III in Pascal. We used it for an academic activity and it was received with some interest. Pascal was certainly better than Fortran, since it took programming to a much more structured state and got rid of chaotic constructs like the GOTO. (Admittedly, if you use too many IF or CASE statements, you can produce spaghetti code in Pascal as well.)

I started working for an Apple distributor, where I was responsible for support, mostly on the software side. After office hours, I would stay and program a simulation compiler in Pascal on the Apple III for my M.S. thesis. This would usually keep me there until close to midnight, and the whole cycle would start again the next morning. But I was a good Apple III user and programmer by then.

The IBM PC was introduced in 1981, a year after the Apple III, and it was an immediate success.

Sunday, April 09, 2006

The Personal Computer (1982-1983)


My transition from punched cards to a terminal had been quite rapid, in the space of 4 years, but that was nothing like the transition to personal computers that would follow.

The year I graduated and started working as a research assistant, the department bought some personal computers. These ran the CP/M operating system and could run early games like Pac-Man. CP/M was a simpler precursor of PC DOS. I seem to remember these personal computers being used for only a brief period, about two years.

At the same time some of my friends had bought the Sinclair ZX Spectrum, a cool home computer that you could connect to your TV, loading programs from a cassette player and entering data from the built-in keyboard. The keys felt like bubble gum and the cassette frequently failed, but this was an amazing improvement over the mainframes. The first game I played on the ZX Spectrum was an adaptation of Tolkien's The Hobbit. I had also done the first translations of the ZX Spectrum manuals, for a petty sum, for the computer's importer.

Around the same time period, I accompanied my MS Thesis Advisor to visit a company to see a new personal computer, the Apple II.

Thursday, April 06, 2006

The Fortran Years (1978-1982)

I had written about how I really started Fortran programming, or more accurately, programming as a new skill. It was an interesting language, but the programming environment was far from ideal. We had a Univac 1106 mainframe at the university, and we had to use punched cards.
The punched card had 80 columns, and specific columns had specific meanings in specific languages. In Fortran, for example, a hole punched in column 6 meant that the card continued the statement on the previous card. Columns 1-5 were used for statement labels, and a "C" in column 1 marked the card as a comment, put there for documentation and ignored by the compiler.
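Those column conventions amount to a tiny protocol. As an illustration (a hypothetical modern sketch, not period software), a card can be classified by exactly the rules above:

```python
def classify_card(card):
    """Interpret one 80-column fixed-form Fortran card, given as a string."""
    card = card.ljust(80)[:80]           # pad or trim to the 80-column frame
    if card[0] in "Cc*":                 # column 1: 'C' (later also '*') = comment
        return ("comment", card[1:].strip())
    if card[5] not in " 0":              # column 6: continuation marker
        return ("continuation", card[6:72].rstrip())
    label = card[:5].strip()             # columns 1-5: optional statement label
    return ("statement", label, card[6:72].rstrip())
```

So "C THIS IS A COMMENT" comes back as a comment card, "   10 X = 1" as a statement labelled 10, and a card with a mark in column 6 as a continuation of the previous one.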
Initially we had keypunch machines which punched a hole directly when you pressed a key. Then we got better machines with an 80-character memory, so you could type the whole card, correct any mistakes, and have the machine punch the holes when you finished. They were still difficult to use, however, since there was no screen to show what you had typed, so you had to guess.

One other problem with using the computer center was that you submitted the cards comprising your program (carefully wrapped with an elastic band so that the system operator would not accidentally drop them and scramble the order of the cards - which would be a disaster), and you got your output anywhere from 5 minutes to more than 2 hours later, depending on the load on the mainframe. When the center was crowded, this usually meant you could maybe get two runs per day if you were lucky. You tended to write your program on paper and manually trace through it several times before you submitted it, to reduce the risk of waiting 2 hours only to find you had made a simple mistake and the program did not run.

Fortran was one of the languages used with the programming technique called Structured Programming, which was developed in the 60s and 70s and became very popular, especially with the ascent of the many CASE tools supporting it. The idea was to break a program into structured pieces (usually called "subroutines") in order to reduce complexity, and to organize data into defined "data structures".

Fortran also had a control flow construct, the "GOTO" statement, which passed control to the line carrying a given label. Dijkstra's famous letter "Go To Statement Considered Harmful", published in the Communications of the ACM in 1968, discussed the dangers of using too many GOTO statements, since they turn code into unmanageable spaghetti.
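Python has no GOTO, but the contrast can still be illustrated: the first function below simulates label-and-jump control flow with an explicit "line number" variable, while the second expresses the same computation structurally. Both are hypothetical examples, not Fortran code or Dijkstra's own:

```python
def sum_goto_style(ns):
    """Simulates GOTO-driven flow: 'pc' plays the role of the line label."""
    total, i, pc = 0, 0, 10
    while True:
        if pc == 10:              # 10: IF (I >= N) GOTO 40
            pc = 40 if i >= len(ns) else 20
        elif pc == 20:            # 20: TOTAL = TOTAL + NS(I); GOTO 10
            total += ns[i]
            i += 1
            pc = 10
        elif pc == 40:            # 40: done
            return total

def sum_structured(ns):
    """The same computation with structured control flow."""
    total = 0
    for n in ns:
        total += n
    return total
```

Even in this tiny example, following the jump-driven version means tracing label numbers by hand; the structured loop reads top to bottom - which is essentially Dijkstra's point.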

Because Fortran had structures for arrays, vectors and other multi-dimensional mathematical constructs, it was pretty useful for engineering programs. The many optimisation and numerical analysis packages developed in Fortran also helped the popularity of the language.

When I wanted to develop a kind of database for my books (remember, this was before relational databases or Excel spreadsheets!), I had to use Cobol, since Fortran was really weak at reporting. Cobol's arcane, fixed syntax was strange, but its fixed card sections could be shared from one program to another, so it was quite easy once you had the basics.

One development around 1981 was the installation of "floppy disc readers". These used 8-inch discs (which really were "floppy" when you waved them). Suddenly it was possible to write the whole program using a (quite primitive) visual editor, save it on a floppy disc, and send it to the mainframe yourself rather than asking an operator to do it. You still had to wait for the output to see the results, but this was an enormous improvement.

Around the same time the Univac was replaced with a CDC Cyber mainframe. Moreover, we had terminals! For the first time I was introduced to interactive use of computers, albeit through a simple textual terminal.

I soon discovered the Adventure game (Colossal Cave Adventure). This was a textual adventure that went like "You are in a maze of twisty little passages, all alike..." and asked you to type commands that let you move through a complex geography and interact with objects. Of course there were dwarves appearing out of nowhere and hurling knives at you, and you could easily die by falling into a chasm if you took a wrong turn, but the concept of playing games on a computer was just so novel... I would later discover that many elements of the game seemed to echo Tolkien's The Hobbit and The Lord of the Rings.

At night the mainframe was used to support commercial companies, running their jobs when there was little or no student work. Since I had befriended all the operators, I would sometimes stay late and run my jobs alongside the commercial programs, enjoying output in a minute rather than the usual 2-3 hours. During those long programming years, I was part of an optimisation project for the municipal bus company, where we ran huge Linear Programming models to optimise the allocation of buses to different routes in the city, with a post-processor to present the results in a humanly readable fashion.

I don't quite remember how many programs I wrote with Fortran, but by the year 1982, I was an expert programmer.

Sunday, April 02, 2006

The Beginning (1978)

I started my university studies in 1978. I remember going to my first freshman programming course. My study field was Industrial Engineering, and just like any other Engineering student, I had to take a course in Fortran.

The instructor was actually from the Nuclear Engineering Department. He had a peculiar voice, and many people found the class very difficult to follow, especially if you were not really sure what - if anything - this new skill meant for you.

But for me, since I had always been something of a mathematician in high school, this new way of logically structuring your thoughts so that you could make a computer follow a determined path based on user input was very interesting. It immediately appealed to me, maybe because it was a way to create something you could totally control (I found out about uncontrollable software much later...).

It must have been a week before I noticed that there were only about 5 people left in the class. We were basically the only ones who could follow what was going on and could actually work through some examples.

I entered the Computer Center on the Friday evening of that first week, and my almost-30-year journey in Software Development began.

My -almost - 30 years in Software (Preamble)

In two years I will be celebrating my 30th year in Software Development. Of course there are many people who have spent a much longer period doing things they are good at, but Software Development is an area that has undergone so many changes that some of the stories we "old-timers" tell can seem like fairy tales - at least they do to my children.

The developments in Software Development paralleled the developments in computers, but not necessarily at the same pace. People were more willing to change their computers, especially in recent years, than developers were to change their methods, languages or development environments.

I will now post a series of blog articles presenting the history of software development from a personal perspective. I hope this "fairy tale" is interesting to young or aspiring software developers who would like to take on this difficult profession.

Sunday, March 26, 2006

An Australian Sensation : Enterprise Architect

I'm not referring to a new Australian swimmer or cricket player. The "sensation" I mention here is a relatively new CASE (Computer-Aided Software Engineering) tool produced by an Australian company named Sparx Systems, which most of you have probably not heard of.

I have personally used many CASE tools over the course of my 28-year software development career, but most were either tools covering a single aspect of software development (design, development, etc.) or loose collections of tools that did not integrate very well. Even the best-known offerings, like the Rational Suite, were collections of tools crudely tied together that frequently required manual massaging of data to move from one tool to another.

When I discovered Enterprise Architect, it was a really pleasant surprise. First of all, it covered almost all phases of Software Development, from Business Process Modelling to Analysis, Design, Implementation, Testing, Project Management and so on. At that time the tool supported UML 1.1 diagrams. The second surprise was the price: even the most expensive (Corporate) edition cost about $250. The most important surprise was that all the functionality came in a single module - everything in one neat package. The fact that it also covered Requirements Management helped as well.

Over the 4 years since I started to use (and enforce the use of) EA in my software projects, the tool has improved a lot. The current version (6.1) supports UML 2.0, has an optional module that provides forward and backward engineering automation and advanced modelling functionality (WSDL modelling, etc.), now supports SQL Server as a repository (instead of the internal MS Access-based format), can be used in multi-user mode, and can generate highly customisable documents.

I would like to recommend the tool to any developer who is serious about software development but does not want to shell out a lot of money for tools. I have not been able to compare it to the new Microsoft Visual Studio Team System, but I have tried some of the forward and backward engineering functionality that the MDG add-on brings, with Visual Studio .NET Professional 2005, and it looks very promising. Its Requirements Management functionality is also very commendable, since you can use requirements as separate objects in your UML diagrams and get full traceability as well.