Graphical Thread Dumps

Posted by F 24/11/2005 at 23h00

I am surprised by the high number of Java developers I meet who do not know what a Java Thread Dump is or how to generate one. I find it a very powerful tool, and it is always available as part of the JVM. I haven’t played much with Java 5 yet, but it comes with jstack, a new tool that makes generating thread dumps even easier.

Earlier this year, I was working on a load test for a well-known airline. We were tuning the environment as much as we could, monitoring and profiling to know where to focus our optimization efforts. The solution involved a fairly tall stack: Apache httpd, WebSphere, FuegoBPM, Tibco messaging, and Oracle RAC.

The system held up pretty well under load until, at a certain point, it abruptly halted and stopped processing new requests. Every time we ran the load testing scripts we experienced the same symptoms. Not even the official testers –with allegedly powerful testing and monitoring tools– were able to identify the cause of the problem.

So, I decided to get a few Thread Dumps of WebSphere’s JVM. On Unix, you do “kill -3 <pid>” and the dump goes to WebSphere’s native_stdout.log. We inspected the dumps but couldn’t identify deadlocks or any other obvious anomaly, although the answer was right before our eyes.
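By the way, if you are on Java 5 you can also capture a dump programmatically: Thread.getAllStackTraces() returns a snapshot of every live thread and its stack. Here is a minimal sketch (the class name and output format are just mine, roughly mimicking what kill -3 prints):

import java.util.Map;

public class SimpleThreadDump {
    public static void main(String[] args) {
        // Snapshot of every live thread and its current stack trace (Java 5+).
        Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : dump.entrySet()) {
            Thread t = entry.getKey();
            System.out.println("\"" + t.getName() + "\" state=" + t.getState());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("\tat " + frame);
            }
            System.out.println();
        }
    }
}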

AT&T Text to Speech

Posted by F 22/11/2005 at 13h25

A co-worker pointed me to AT&T Natural Voices, a Text-to-Speech research project from AT&T. I tried the online demo and I’m really impressed.

It’s been a long while since I last tried a TTS system, so maybe I’m just late to the party… but this one produces very natural speech. I tried a couple of English and Spanish voices, and they all sound very real.

I knew about the Festival project, which provides a free speech synthesis system, but if you check the online voice demos you will probably agree that it’s not on par with AT&T’s.

Baby Pilar is here

Posted by F 19/11/2005 at 10h40

This is old news already, and although it’s not about Software, it’s definitely about Development :-)

I’m happy to share that Pilar, my second daughter, was born last Monday 14th, a couple of weeks ahead of schedule –and on budget ;-).

She has jaundice, so she is under photo-therapy. Fortunately, this can be done at home, and should only last a few days.

Micro and Blind Optimizations

Posted by F 03/11/2005 at 22h16

Yesterday, a good friend of mine and ex-coworker contacted me to share his frustration.

(he hates to be called “Polino”, so I won’t.. doh!)

He finished a software solution for a customer, and now an expert is reviewing his Java code.

The expert code reviewer insists on small performance optimizations, but he is way off target. He wants to micro-optimize, and to do it blindly.

For example, he reported that the following code was doing “inefficient String concatenations”:

String myString = "Some text here "+
                  "Some text there "+
                  "Some more... ";

And that this was an “inefficient way of creating Longs”:

 myList.add(new Long(1));

These examples are probably optimized away by modern Java compilers. But even if they weren’t, they are unlikely to have much impact on the performance of the system as a whole.
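For what it’s worth, the compiler folds String literals joined with + into a single constant, and on Java 5 Long.valueOf reuses cached instances for small values. A quick sketch showing both (just an illustration, not the reviewed code):

import java.util.ArrayList;
import java.util.List;

public class MicroOptDemo {
    public static void main(String[] args) {
        // The three literals below are merged at compile time into one
        // String constant, so no concatenation happens at runtime.
        String myString = "Some text here " +
                          "Some text there " +
                          "Some more... ";
        System.out.println(myString);

        // Long.valueOf (Java 5+) can reuse cached instances for small values,
        // whereas new Long(1) always allocates a fresh object. Either way,
        // it is the kind of difference only a profiler should get to decide on.
        List<Long> myList = new ArrayList<Long>();
        myList.add(Long.valueOf(1L));
        System.out.println(myList);
    }
}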

Oracle DB Express

Posted by F 01/11/2005 at 16h45

I just learned about an interesting move from Oracle: they have released a Beta version of their new Oracle Database 10g Express Edition.

This Express edition is free of charge. It is free not only for production use, but also for distribution.

These are the limitations it has:

* It restricts itself to using only one CPU
* Only one server and database instance (SID) per installation
* Database size limit of 4GB

Looks like a good deal for ISVs, developers and small shops. It’s a good way to gain more mind-share among small software companies and younger/future developers.

I tried it on my Linux laptop and got it up and running in a couple of minutes. The Express name does not make it any lighter though: it still consumes a good chunk of RAM, and the database instance allocates 1GB of disk. So I’ll stick to PostgreSQL for powering this blog :-).
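If you want to poke at it from Java, a plain JDBC connection is all it takes. A minimal sketch, assuming the default XE listener on port 1521 with SID XE, the ojdbc driver jar on the classpath, and placeholder credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class XeConnectTest {
    public static void main(String[] args) throws Exception {
        // Classic Oracle thin driver; the ojdbc jar must be on the classpath.
        Class.forName("oracle.jdbc.driver.OracleDriver");

        // Default XE install: host localhost, port 1521, SID "XE".
        // Replace user/password with whatever account you created.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "myuser", "mypassword");
        try {
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual");
            while (rs.next()) {
                System.out.println("Connected, got: " + rs.getInt(1));
            }
        } finally {
            conn.close();
        }
    }
}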