Tuesday, December 25, 2007

Cruising around with Android



Happy holidays, my fellow code slingers! For the past three weeks, I've plunged myself into the world of Android, the new mobile-platform SDK from our friends at Google. I'm really excited about the possibilities of this platform. The telephony API will be broken wide open for our fun and amusement.

Do you want to sync your phone contacts to a remote data source? No problem: use the bundled HttpClient and write a service. Do you have a buddy who likes to drunk dial you at 3:00 a.m.? Why not write a service that filters his calls out during certain hours? Things like this were hard or downright impossible in the past. Now we have Android!

I thought I'd do a little tutorial showcasing the Location API that ships with Android. This is definitely a fun API to play with, so allow me to introduce the TrivialGPS application.

The aptly named TrivialGPS application will display a MapView, and center it on our current location as we move through the bay in "real-time". We use the observer pattern with the LocationManager, so our application can receive updates about changes in our current position and update the MapView accordingly.

At this point, I'm going to assume that you have either looked at the Android tutorials and have at least a rudimentary understanding of the framework, or that you're so damn intelligent that you don't need to.

First off, your manifest file must declare a couple of permissions for this application to work.

  • INTERNET
  • ACCESS_FINE_LOCATION
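
Those two end up as uses-permission entries in AndroidManifest.xml. A minimal sketch, with a placeholder package name, looks something like this:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.trivialgps">
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <application android:label="TrivialGPS">
        <!-- the TrivialGPS activity is declared here -->
    </application>
</manifest>
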
TrivialGPS is a single activity that displays a map, so our activity must extend MapActivity.

We need three member fields: The MapView which we will be displaying, a MapController which can center the map, and a LocationManager which we can query for providers and request geo information from.

public class TrivialGPS extends MapActivity {

private MapController mapController;
private MapView mapView;
private LocationManager locationManager;

...

Our onCreate method starts out very simple. We create a new MapView, set the zoom level to 22 (pretty close up, so we can see the streets), store a reference to the MapController, and then tell Android to display the map. We'll be revisiting this method a little bit later.

@Override
public void onCreate(Bundle icicle) {
super.onCreate(icicle);

mapView = new MapView(this);
mapController = mapView.getController();
mapController.zoomTo(22);
setContentView(mapView);
}

In order to receive notifications about location updates, we need a LocationListener. A LocationListener is basically a callback that will be executed whenever there's a location change event. The simplest way to do this is with an inner class. The class must implement several methods, but onLocationChanged is the only one that will actually do any work. It receives the new coordinates, converts them to microdegrees, builds a GeoPoint from them, and then uses the MapController to center the view on the new point.

public class LocationUpdateHandler implements LocationListener {

@Override
public void onLocationChanged(Location loc) {
int lat = (int) (loc.getLatitude()*1E6);
int lng = (int) (loc.getLongitude()*1E6);
GeoPoint point = new GeoPoint(lat, lng);
mapController.setCenter(point);
setContentView(mapView);
}

@Override
public void onProviderDisabled(String provider) {}

@Override
public void onProviderEnabled(String provider) {}

@Override
public void onStatusChanged(String provider, int status, 
Bundle extras) {}
}

As promised, we now return to the onCreate method.

The first thing we've added is a call to initialize the LocationManager. We do this with a simple getSystemService call. After that we tell the locationManager that we wish to receive continuous updates from the hardware GPS provider (the 0, 0 parameters mean no minimum time or distance between updates) and pass it a reference to our LocationListener.


@Override
public void onCreate(Bundle icicle) {
super.onCreate(icicle);

// create a map view
mapView = new MapView(this);
mapController = mapView.getController();
mapController.zoomTo(22);
setContentView(mapView);

// get a handle on the location manager
locationManager =
(LocationManager) getSystemService(Context.LOCATION_SERVICE);

locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0,
new LocationUpdateHandler());
}

I hope some of you find this useful. It took me several hours to figure this stuff out and get it working in my own application, so perhaps this tutorial will be a nice time saver for some of you out there, and you can get back to your eggnog and Xmas toys.

The complete source for this tutorial is available here :
http://code.google.com/p/trivial-gps/
or on github if you prefer:
http://github.com/jasonhudgins/TrivialGPS/tree/master

Tuesday, October 30, 2007

A Google Translate API?

I got all excited today when my favorite podcasters, the Java Posse, announced in episode #148 that Google had released an API for their translate service.

I've had my own implementation going on for several months now, and I was all ready to post that I was going to discontinue my Google Translate Scraper. Who wants to use a scraper when you've got a real robust interface, right?

Then I looked at the code, and there is no way this is an official Google API, because it's nothing more than a two class scraper that's far more immature than my own implementation.

A couple of key differences between this one and mine: mine has some basic test coverage, this one doesn't. My implementation will throw a TranslationException if you try to send more data than Google will process (you can't just send them an arbitrary amount of text). My implementation also uses a real XML parser, so even if Google radically changes their HTML structure I can still parse it cleanly.

So for now I'll keep my project going, hopefully one day Google really will release an API for their translate service.

Thursday, October 11, 2007

Return of the Coder

It feels good to get some coding done, even if it isn't a whole lot. I've had a long hiatus for the last two months and now I'm trying to pick things up where I left off. The reason for my absence?

Maxim Alexander Hudgins

Maxim Alexander Hudgins (7 lbs 10 oz) was born on Aug 20th; he's my first and I'm quite proud. In fact, I hear him cooing in the background as I write this. I know what you're thinking, but no, we did not name him after a men's magazine. His name is a very old Russian name and it's pronounced "mak seam".

Plenty of photos here : www.flickr.com/photos/jasonhudgins/tags/maxim/

Changing the subject, almost every java developer that I know uses Eclipse as their primary IDE. I listen to the javaposse podcast pretty religiously, and they have always said good things about NetBeans. So I decided to take a serious look at it.

The big thing for me is Maven support; I love Maven and can't imagine not using it. With Eclipse you have two options: you can use Maven's eclipse plugin (the mvn eclipse:* commands) to generate the project files for Eclipse, or you can use a Maven plugin inside Eclipse itself. Bad things tend to happen if you try to use both at the same time. Of the two I prefer generating the files from Maven; I've only experienced grief with the Eclipse-side plugin.

Getting Maven to integrate cleanly with WTP in Eclipse for developing web apps was a painful experience for me. For one thing, you can't use the newest version of Eclipse, Europa, because the Maven eclipse plugin doesn't yet support WTP 2.0. Even using 3.2.2, it seemed like I was always having to muck around with .classpath to try to get things working.

There is also a Maven plugin for NetBeans, called mavenide. It also comes from Codehaus, the makers of the Eclipse plugin. After installing the plugin and opening one of my Maven projects, I was immediately aware that mavenide has very tight integration with the NetBeans platform. Its features are very well organized, and I have yet to do any mucking with config files; everything just works, and I'm very happy. The webapp support is really great. NetBeans 6 is looking good too, but its Maven support wasn't quite up to par, so I'm still using 5.5 at the moment. I'm ready to say goodbye to Eclipse for the time being, but I expect both IDEs to continue to improve, so swearing fealty to either would be short-sighted.

And the final bit of news: I've released version 0.9.8 of the GoogleTranslateScraper; it's available on my software page. I've cleaned up the unit tests and put a cap on the amount of text you can submit for a translation job (30,000 characters max).

Somewhere around that size Google stops translating. My first idea was to just split the input into smaller chunks and submit multiple jobs concurrently. Sounds easy, but you can't just arbitrarily split the data; you have to try to do it on a sensible boundary, like the end of a sentence, otherwise your translation won't read smoothly across the transitions. A character followed by some punctuation would do it for English, but I don't know how to do it for non-western languages, Chinese, etc. So I took the easy way out and punted the problem up to the next layer in the application. If you're using my library and want to translate large globs of text, then it's your job to split the data into chunks in whatever way you feel is appropriate.
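
For what it's worth, here is a minimal sketch of one way a caller might do that chunking, using java.text.BreakIterator, which understands sentence boundaries for a given locale. The class name and the 30,000 character cap are just placeholders for illustration:

import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class SentenceChunker {

    // Hypothetical cap; use whatever limit the scraper actually enforces.
    private static final int MAX_CHUNK = 30000;

    public static List<String> chunk(String text, Locale locale) {
        BreakIterator sentences = BreakIterator.getSentenceInstance(locale);
        sentences.setText(text);

        List<String> chunks = new ArrayList<String>();
        StringBuilder current = new StringBuilder();

        int start = sentences.first();
        for (int end = sentences.next(); end != BreakIterator.DONE;
                start = end, end = sentences.next()) {
            String sentence = text.substring(start, end);
            // Flush the current chunk if this sentence would push it past the cap.
            // (A single sentence longer than the cap will still blow past it.)
            if (current.length() > 0
                    && current.length() + sentence.length() > MAX_CHUNK) {
                chunks.add(current.toString());
                current.setLength(0);
            }
            current.append(sentence);
        }
        if (current.length() > 0) {
            chunks.add(current.toString());
        }
        return chunks;
    }
}

Whether a sentence is even the right unit for languages like Chinese is exactly the question I punted on, so treat this as a starting point rather than a solution.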

Wednesday, July 25, 2007

Google Translator Restlet published

You can find it here, including a complete tutorial for consuming the service using the restlet framework. Even if your language of choice is not Java, it shouldn't be very difficult for you to write your own consumer using the documentation I've provided.

In brief, it's an asynchronous RESTful service that allows you to create a translation job by POSTing to a specific URL. If the job is accepted, it returns a 201 Created status and a Location header telling you where you can pick up the results when processing has completed. Then it's just a simple matter of polling the result URL (a job resource). The job resource will return an HTTP 102 Processing status until it completes, at which point you'll get a 200 OK and your results as UTF-8 encoded text/plain.
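
To make the shape of the interaction concrete, here is a rough client sketch using plain java.net.HttpURLConnection. The jobs URL, the payload, and the two-second polling interval are all invented for illustration; the real details are in the tutorial linked above.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TranslateJobClient {

    public static void main(String[] args) throws Exception {
        // Hypothetical jobs URL; the real one is in the tutorial.
        URL jobs = new URL("http://example.com/translate/jobs/en,fr");

        HttpURLConnection post = (HttpURLConnection) jobs.openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.setRequestProperty("Content-Type", "text/plain; charset=UTF-8");
        OutputStream out = post.getOutputStream();
        out.write("Hello, world".getBytes("UTF-8"));
        out.close();

        if (post.getResponseCode() != 201) {
            throw new IOException("Job not accepted: " + post.getResponseCode());
        }
        // The Location header tells us where to pick up the result.
        URL job = new URL(post.getHeaderField("Location"));

        while (true) {
            HttpURLConnection poll = (HttpURLConnection) job.openConnection();
            if (poll.getResponseCode() == 200) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(poll.getInputStream(), "UTF-8"));
                System.out.println(in.readLine());
                break;
            }
            // Not done yet; wait a bit and poll again.
            Thread.sleep(2000);
        }
    }
}

One wrinkle worth noting: many HTTP clients, HttpURLConnection included, expect 1xx responses to be interim rather than final, so a 102 Processing reply may need special handling on the client side. That's part of the argument for 202 Accepted discussed below.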

I posted about my service on the restlet list several days ago, in an effort to get some feedback on what I might have done better. The biggest point of contention has been the jobs resource URL that you POST to when you create a job. Following the guidelines in the RWS book, I chose the first of these two options:

  • /jobs/en,fr
  • /jobs/en/fr
The idea here is that if you have two pieces of scoping information that are ordered but not hierarchical (the from and to languages, respectively), then you should separate them with commas. Several people, however, have presented a good case for looking at it as a hierarchical set of folders. I haven't completely made up my mind yet, so comments are welcome.

The other minor point of contention is my choice to use a 102 status to indicate that a job is still being processed. At least one person felt that a 202 Accepted would be more appropriate. Personally I think that a 102 Processing status is a clearer way to express the resource state, if you're not opposed to using the WebDAV extended status codes.

Beyond stress testing this service, I don't intend to become a major service provider, so I'll publish the source code to my translate restlet backend eventually. The last two restlet framework releases, 1.0.2 and 1.0.3, both have problems with the servlet adapter, the piece of the framework that allows you to run a restlet within any web container. Because of this I've had to code in some workarounds that I'd rather not publish. Hopefully these issues will be cleared up in the 1.0.4 release.

Tuesday, July 17, 2007

Google Translate Scraper 0.9.6 released

Available for download from my software page and from my maven 2 repository.

This is a minor release that improves the exception handling and includes a few other tweaks. Probably the biggest change is the name; I felt that Google Translate Engine was not a good description. The new title, while a bit less glamorous sounding, is more accurate.

Very soon I'll publish a RESTful service that's built on top of the scraper. Hopefully in the next day or so.

In other news, I recently switched from jboss-4.0.2 to the Glassfish Application Server. So far I'm really pleased with glassfish. I've been running it under a 1.6 JVM with no problems. The web based administration console is really slick.

Friday, July 13, 2007

RESTful Web Services

As promised, here's my quick book review of RESTful Web Services by Leonard Richardson and Sam Ruby.

This book has a wealth of conceptual information for what the authors have dubbed "Resource Oriented Architecture". They identify three different categories of web services: "Big" web services (those that use WSDL and SOAP stacks), REST-RPC hybrid services, and strictly RESTful web services. They describe the pros and cons of each, and present a good case for RESTful services.

I set out to create my own RESTful service, and this book really helped me to clarify and design it correctly. For Java developers: unless you know Ruby or have a strong desire to learn it, feel free to skip Chapter 7, where the authors implement a RESTful service using RoR. The implementation details aren't what makes this book great, it's the concepts it describes, so don't get bogged down in the details if you don't need to. In fact, the restlet.org website has restlet implementations of all the examples in the book, which is going to help a Java guy much more than the Ruby examples. The section on restlets is brief but useful for getting started; I read that section at least three times.

The restlet framework has been a joy to work with so far, and I'll probably be releasing the source to my restlet implementation very soon.

If you're interested in learning REST and want to understand it in more developer-friendly language than Fielding's now-famous dissertation, then I highly recommend reading this book.

Tuesday, July 10, 2007

Dumb IT Hiring Managers

I've had a big pet peeve about IT hiring practices for several years now. I'll be perusing the IT job listings on a board, and I almost always find at least one position with an absurd set of requirements.

Today, I received an email from an unknown IT recruiter about a Java position in Plano. One of their core requirements for the position was 9 to 10 years of J2EE experience.

The J2EE 1.0 specification and reference implementation was released sometime in 1999, and as far as I know, the first full implementations weren't shipping from vendors until 2000. Wikipedia has a J2EE timeline here. If you started working with J2EE the very moment the 1.0 specification was released, you could at most have about 8 years experience.

I also think that requiring X number of years with a particular technology is a dumb way to look for competent people. I've seen plenty of developers who've been doing Java for three or four years and still haven't figured out that the Collection interface has a built-in iterator.
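
For the record, that's all it takes; every Collection hands you an Iterator, and the Java 5 for-each loop uses that same iterator under the covers:

import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;

public class IterationDemo {

    public static void main(String[] args) {
        Collection<String> names = new HashSet<String>();
        names.add("alice");
        names.add("bob");

        // The built-in iterator, available on every Collection.
        for (Iterator<String> it = names.iterator(); it.hasNext(); ) {
            System.out.println(it.next());
        }

        // The for-each loop does exactly the same thing.
        for (String name : names) {
            System.out.println(name);
        }
    }
}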

What hiring managers should be looking for is scope and breadth of experience. How many technologies have you worked with, and how many years have you been a developer overall? Do you write code on your own time, or do you just reluctantly churn out enough lines to keep your manager from frowning at you? I wouldn't ever hire someone who doesn't enjoy programming, but then again, nobody's asking me.

Wednesday, June 27, 2007

GoogleTranslateEngine 0.9.3 released

GoogleTranslateEngine 0.9.3 can be obtained from my software page. This is a minor release that in no way alters the functionality of the library. Google is apparently tweaking their translation engine for better accuracy, and their tweaks broke a few of my unit tests. I also fixed a null pointer exception that I was getting on occasion.

I've been focusing on studying REST design for over a month now with brief excursions into ruby on rails. I'll soon compose a review of the excellent book RESTful Web Services, by Leonard Richardson and Sam Ruby, well, just as soon as I finish reading it.

I initially attempted to craft my own restlet interface for the GoogleTranslateEngine, however I quickly realized that I didn't understand HTTP and RESTful design as well as I thought I did. This book has gone a long way towards correcting that.

Sunday, May 6, 2007

Translate Webservice 100% functional

My mission to complete and deploy a SOAP webservice for my GoogleTranslateEngine is finally complete. I had it partially working a month ago, but for several weeks I've been trying to understand why the jBossWS stack insisted on sending me garbage for non-latin encodings.

My friend Dustin had insisted that I write some unit tests, and that ended up being a very good thing. I have been doing all my development with Eclipse on a Windows XP machine, and the unit tests were all passing perfectly. The webservice is running under JBoss on a Fedora Core 6 box. Because I'm using Maven as my build system, it's trivial to do builds and tests on any other system, but I never thought to run my unit tests on my Linux box; I simply assumed they would work like they did on Windows.

When I finally got around to doing it, one of my unit tests was failing. As it turns out I wasn't setting my encoding parameters properly when passing my InputStream into the tagSoup parser. For some reason that I still don't understand, this was working just fine on windows, but not at all on linux.
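
In case anyone hits the same wall: the usual suspect in this kind of bug is the JVM's platform default charset, which differs between Windows and Linux and gets used whenever you hand a parser a raw stream without saying what the bytes mean. A minimal sketch of the explicit version, assuming UTF-8 input, looks something like this:

import java.io.InputStream;
import java.io.InputStreamReader;

import org.ccil.cowan.tagsoup.Parser;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

public class EncodingSafeScrape {

    public static void parse(InputStream html, DefaultHandler handler)
            throws Exception {
        // TagSoup's parser is a SAX-compatible XMLReader.
        XMLReader parser = new Parser();
        parser.setContentHandler(handler);

        // Wrap the stream in a Reader with an explicit charset instead of
        // letting the platform default decide how to decode the bytes.
        InputSource source = new InputSource(new InputStreamReader(html, "UTF-8"));
        parser.parse(source);
    }
}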

So, the point of this story is, don't be shy with unit tests. They can be very helpful!

I just released version 0.9.2 of the Google Translate Engine which can be found on my software page. This, of course, includes the bug fix that I mentioned.

The glorious SOAP webservice that I've been working on is published here. I'm keeping it open for now, unless a deluge of people start using it, but I don't think I have much to worry about.

So my next project will be a restlet implementation. This looks like the way of the future, so until next time...

Monday, April 30, 2007

webservice pain

I wish I could proudly announce that I have a slick new webservice for my Google Translator ready for public consumption. It's been almost a month since I started working on the thing in my free time. This should have been a relatively straight-forward project to implement, but alas, it's been hell every step of the way. SOAP is hard and ugly, and plenty of people warned me about it before I even started. "Use REST!!", they proclaimed, and now I can see why.

I started off on my journey with the intention of making a nice POJO-backed JSR-109 style webservice. The SEI was fairly trivial, and it works much like the RMI version that I did earlier. Once you have your interface done, there isn't any more coding to do; it's all XML wiring, mapping, and magic, which is far nastier than it should be.

My first mistake was trying to attack the problem in a generic way. Before you get started on a webservice, know what your target platform is and read the container-specific documentation and examples. My target platform is jBoss 4.0.5. I would have saved myself a great deal of time and pain if I had started off with the jBossWS examples.

The reason your platform choice is so important is the tooling. Your platform will most likely provide some set of tools for generating all the nasty XML files that JSR-109 demands (jax-rpc mapping, WSDL, etc). jBossWS comes with a little tool called wstool which takes a pretty simple XML config file and then spits out all the ugly XML files that you need for your webservice. Having to generate these files by hand would give you nightmares, so good tooling is essential. I also had to download and use the now out-of-date JWSDP-2.0 for several things, including wscompile (more tooling).

Once all that was done, everything worked. Well, almost. As long as no non-Latin character encodings were involved, everything was just peachy. If, however, I want to translate English to Russian and get UTF-8 Cyrillic returned to me, things don't work so well. Somewhere inside the jBossWS code, it's mangling my result string, and that's going to take some poking around. Hopefully I'll figure everything out and get the thing to behave correctly.

So, maybe next month?

Wednesday, April 11, 2007

The joys of FIOS

So there I was, all moved into the new house and excited about finally getting verizon fios data service and fiosTV. I would love to tell you about the gross incompetence I had to deal with while using the verizon avenue DSL service, but that's a whole other story. Back to fios: for the average joe, all you have to do is call up their residential services, place an order, wait for the tech to come out, and you're set.

I, and probably anyone who reads this blog, wouldn't fall into the average joe category, and that's where the pain begins. I like serving web-accessible stuff from servers at my home office. It's cheaper than using a colo, and if something critical goes boom, I'm not driving 30 miles to my colo in my pajamas. I like to be close to my hardware.

After placing my residential order for everything, I discovered that the residential fios data service blocks all inbound traffic on ports 25 and 80. Of course, I should have researched this before I placed my order, but I like living dangerously, or it could be that I'm just lazy about that kind of thing.

Verizon, with an abundance of marketing genius, has determined that if you need inbound web traffic, then you must be a business, and should be forced to get a static IP, even though you might not need one. I guess they have never heard of dynamic DNS. You have to order it through their business services, which is a totally separate department.

Annoying, but easily remedied by a call to their office...

So they canceled my residential order, and after 30 minutes we had my new business service order in place; the tech would come out on April 9th. I'm all happy, right up until the part where I say, "Great! I'd also like to set up fiosTV now too." Lo and behold, because I use the business service with a static IP, they refuse to sell me fiosTV. I demand to know why, and it's not because of any technical limitation. The reason is that the marketing wizards have determined that if you're a business, then you probably have a lot of employees, and you shouldn't be able to provide cheap fiosTV to them; instead you have to get some business directTV package which probably has an astronomical price tag. I guess verizon has never heard of a home office either.

So here I am a couple of weeks later: I have nice, speedy fios data service, but I'm reduced to using rabbit ears on all my TVs until I figure out what kind of cable/satellite setup will suit my needs.

Fios is a good deal faster than DSL, no complaints in the speed department. The router they give you is pretty damn nice too. However, I still haven't been able to set up my outbound email, because it requires some kind of user/password that they failed to mention at any point during this entire process.

Well, at least I can get back to coding. Who needs TV?

Friday, March 23, 2007

Google Translate Engine 0.8.0 released

For some of my upcoming projects, I was hoping to find some publicly available translation web service. I managed to find one, but it seemed to be under fairly heavy load and unavailable at times.

Google has a very nice, simple web interface for performing text translations. Unfortunately they don't offer an API for it. Since I couldn't find a suitable web service, it made sense to do my own implementation by scraping google's site. Besides, I could always use more experience with web services.

Before I could do any of that, I had to write the core library. It's not a web service, but it's completely usable in a Java application. I haven't had an opportunity to test it out very much, but I decided to share it anyway. You can find the Google Translate Engine v0.8 here on my software page. It appears to work fine, but I've had a really hard time getting my workstation to properly output Unicode on the console, so it definitely requires more testing.

I played with lots of new stuff while making this: maven, javadoc, SAX, and unit testing, to name a few. I'm using John Cowan's nice tagSoup library to do the scraping. It allows me to use a SAX handler even on badly formed HTML.

Now I'm in the process of trying to figure out how best to implement an old-school JSR-109 servlet-driven JAX-RPC web service. My target platform at the moment is JBoss. As a prerequisite exercise I'll probably make a tiny RMI service that hooks into the translate engine. Hopefully I can get that done in the next couple of days.

Monday, March 5, 2007

i18n'ing it up!

I plan on working on some multi-lingual websites, so what better encoding to use than UTF-8, right? You get dozens of languages and character sets all supported by a single encoding, like on this site.

So what does it take to get all this set up? Let's start with the most obvious thing you can do: sticking this into your HTML head.

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
That's not nearly enough, however, because most browsers will also check your HTTP Content-Type header, and if it doesn't agree with your meta tag, the HTTP header value takes precedence. This is fairly easy to correct. In JSP I can do it with a page directive:

<%@ page contentType="text/html; charset=utf-8" %>
What about sending and receiving form data?

Most browsers will send in the same encoding that the page is in, but for an added guarantee you can specify an encoding in your form tag:

<form accept-charset="utf-8" method="post" action="postArticle.do">
...
</form>
Things can get a little trickier on the receiving end, however. A servlet will check the character encoding with request.getCharacterEncoding(), and if it's null, you won't get what you're expecting. I'm not enough of a container expert to understand what's going on behind the scenes, but in my case it was necessary to tell Java to use UTF-8 encoding. I did this with a simple modification to my ActionForm bean.

public void setContent(String content) throws Exception {
// the container decoded the bytes as ISO-8859-1, so grab them back
// and re-decode them as UTF-8
this.content = new String(content.getBytes("8859_1"), "UTF8");
}
Granted, there are probably much better ways to do this (request filters, etc.), and I'd be more than willing to listen to anyone else's expertise on the subject.
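
For the curious, the filter approach would look roughly like this; it's a sketch rather than something I've dropped into my own app, so treat the class name as a placeholder:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class Utf8EncodingFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {}

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        // Only force UTF-8 when the browser didn't declare an encoding,
        // which is exactly the null case described above.
        if (request.getCharacterEncoding() == null) {
            request.setCharacterEncoding("UTF-8");
        }
        chain.doFilter(request, response);
    }

    public void destroy() {}
}

Map it to /* in web.xml ahead of any other filters; with that in place, the ActionForm hack above shouldn't be needed.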

Friday, March 2, 2007

Good books, bad books

I thought I'd share my thoughts on 4 books I've read lately.

Head First HTML with CSS & XHTML - I wanted to teach my wife XHTML/CSS and this book was fantastic. Thorough, well organized, and funny. It's about as interesting as you could make the subject. I highly recommend it.

PHP/MySQL Programming for the Absolute Beginner - Now I'm trying to teach my wife PHP development. From the title of this book, and several decent reviews, it seemed like a good choice. Boy was I wrong! I think most of the reviews written at amazon.com were by amateur coders who don't know any better. The author makes naive assumptions about your development environment, writes bad HTML, and doesn't use best practices. Some of the examples are so nasty that they made my head hurt trying to explain them to my wife. Avoid this book!

PHP and MySQL for Dynamic Web Sites: Visual QuickPro Guide - So back to amazon I went, and this time I did a lot more research. This book is very highly recommended, and quite a few of the book reviewers know what they are talking about. We've just started into it, but the examples use proper XHTML and are fairly concise and well organized. The author doesn't just assume that your environment has register_globals enabled (yuck), and he covers important topics that tend to confuse people, like magic quotes, early on.

Head First Servlets and JSP - I just finished this book myself and I'm quite pleased by it. Its narrative is in the same style as the other books in the series. It's funny (sometimes cynical) and informative. The kung-fu movie captions are awesome. This is one of the few IT books that I've ever read in its entirety. I highly recommend it!

I've just ordered Maven: A Developer's Notebook. That will be my next book to tackle. Now that I'm doing J2EE development, I'm looking for tools that will take the sting out of packaging and deployment. Maven seems to handle a lot of those issues for you. I'm looking forward to learning more about it.

Wednesday, February 28, 2007

Image-TestJPG-1.0 released to CPAN

Today I released a new version of Image-TestJPG.

I fixed some build errors that prevented it from building on newer incarnations of gcc, updated some bad documentation for its usage, and that kind of stuff.

I've tested it, but only very lightly.

Truth be told, I don't use enough Perl anymore to really care how well this works, so long as it works. Coding in XS, the language which binds Perl to C, is painful. The perlxstut is out of date, so instead of creating a whole new module structure with h2xs I just fixed the existing one, which is 10 years old now. But hey, it works! I haven't practiced much with C since college; I just don't work on application tiers where C is needed very often.

Oh yeah, what does this thing do? It's very simple really: it tells you whether a JPG is corrupted or not.

Tuesday, February 27, 2007

Here we go...

Being the masochist I am, when I finally got around to starting up a blog, I decided to host and install the blogging software myself. I have a basic JBoss server up and running. All I had to do was choose which Java blogging app to use, preferably the one that would cause me the least amount of headache.

I looked at three different Java-based blogging webapps tonight.

I ultimately chose Pebble because it's the lightest weight and the least painful to set up. I had to nuke the log4j.jar from the Pebble jar, as it didn't get along too well with JBoss, so the install wasn't completely painless. Of the three, however, Pebble was definitely the easiest. Not requiring a database is nice; this isn't a high-traffic, multi-user blog, so why worry about the overhead?

Pebble is working well so far; it will be interesting to see how far I can get with it. I wonder how easily I could extend it. I'm also a notoriously bad speller, and as I'm entering this, I don't see any type of spell check; that's something I could add. Of course, it might be in here somewhere, I've only been using Pebble for about 10 minutes now.