Monday, October 27, 2008

ALUI Publisher - Part 3: No Redirect Bug Fix of Bug Fix :)

I have been relatively lazy with my blog lately, and I have plenty of articles stacking up... But in the meantime, I thought this one was pretty urgent.

In one of my previous articles, ALUI Publisher - Part 2: Increase Performance by enabling REAL Caching - No Redirect Bug Fix, I explained how to fix Publisher's published_content_noredirect.jsp (see that series for the benefits of using it instead of published_content_redirect.jsp).

Well, I found a small bug on my part, and it is definitely worth fixing if you have not done so already. In my corrective code, I was trimming the trailing spaces off each content chunk (just to optimize the HTML output)... and I did not think of the likely negative effects: if a buffered chunk happens to end on a meaningful space character (such as a sentence separator), that space is lost.

So here is the updated code with the bug fix (specifically, the trim is removed and only the characters actually read are written):

//if there is content, forward it to the requesting client
int buffersize = 2000;
int charsread = 0;
char[] content = new char[buffersize];

//read until the end of the input stream, because the request content length is not reliable
// UTF-8 is necessary
BufferedReader bisr = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));

//write only the chars actually read: no trim(), so meaningful spaces at
//chunk boundaries are preserved, and no padding chars are ever written
while ((charsread = bisr.read(content)) > -1) {
    out.write(content, 0, charsread);
}

bisr.close();
bisr = null;
content = null;
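
To double-check the fix outside of Publisher, here is a minimal, self-contained sketch of the same chunked-copy loop (the class and method names are mine, purely illustrative; it just shows that writing only the chars actually read round-trips the content, trailing spaces included):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;

public class ChunkCopy {

    // read a stream in fixed-size chunks and write only the chars actually
    // read -- no trim(), so whitespace at chunk boundaries survives
    public static String copyChunks(String input, int buffersize) {
        try {
            BufferedReader in = new BufferedReader(new StringReader(input));
            StringWriter out = new StringWriter();
            char[] buf = new char[buffersize];
            int n;
            while ((n = in.read(buf)) > -1) {
                out.write(buf, 0, n);
            }
            return out.toString();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String s = "sentence one. sentence two ";
        // round-trips byte-for-byte, trailing space included
        System.out.println(s.equals(copyChunks(s, 4))); // true
    }
}
```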


Better to find it late than never! :)

The code should be updated on the famous (yeah right...) ALUI Toolbox Google project (or will be soon).

So Long.

Sunday, September 28, 2008

ALUI Enhancement: Get/Set All Preferences from Javascript

As you probably already know, the ALUI Portal personalization features are mostly powered by the ability to set and get "preferences" with various scopes. Rather than expand too much on this concept myself, you can refer to the ALUI Development Documentation (Link HERE) for further explanations. As you read that documentation, you'll learn that preferences can be set and retrieved through:

  • the IDK API (most commonly),
  • the server API (much less common since Server API should mostly be used only if the IDK cannot fulfill your needs), or
  • the Javascript API (but only for preferences of scope "session")

What is usually done is to build a Java or .NET application that uses the IDK API to get/set preferences within a portlet, allowing the presentation or behavior of the portlet to change based on the preference chosen.

But as you build ALUI intranet or even Internet websites, you often realize that most of the portlets are either content-specific with some kind of personalization (e.g. a link showing only if a certain preference has been set for that user...), or have pretty simple behavior and presentation.

Wouldn't it be cool to be able to use Publisher, for example, to build those simple portlets? In most cases, with the power of the ALUI Portal Adaptive Tags coupled with the Publisher PCS Tags (this could apply to any other web content management system, but why not use Publisher as the example since it is still alive :) ), you could build such simple portlets without having to program a portlet in Java or .NET...

But what about preferences? How could you set and get them, since they are available only through an API usable only from those programming languages? Well, ALUI has already thought this through, and integrated into its Javascript API the ability to set/get "Session" preferences. That was great thinking indeed. But I kept (and still keep) asking myself: why develop this idea only halfway? Why allow only "Session" preferences to be set/get through Javascript? All I know is that there is no reason not to go further... and I just wanted to share this essential feature.

Javascript Session preference: How does this work?

So first, let's understand how the Javascript preference access works. Basically, an activity space has been developed, and its code is found in the following package: com.plumtree.portalpages.browsing.sessionprefs. As the portal follows a strict MVC (Model View Controller) architecture, an activity space is mostly composed of a "Control" class (implementing the IControl interface), a "Model" class (implementing the IModel interface) and a "Presentation" class (implementing the IDisplayPage interface). You can refer to the "ALUI Portal Customization Package" to see the code of this SessionPrefs activity space. To request the activity space functionality, simply create an HTTP request with the right parameters, like this:

http://your.host.domain.com/portal/server.pt?space=SessionPrefs&control=SessionPrefs&action=getprefs&_preferencename=

http://your.host.domain.com/portal/server.pt?space=SessionPrefs&control=SessionPrefs&action=setprefs&_preferencename=preferencevalue

From there, all that remains is to allow that request to be called from Javascript, through an AJAX HTTP request... The Javascript methods GetSession() and SetSession(), which initiate that AJAX request, use the PTPortalContext Javascript object defined on every page to get the right activity space URL, as follows:

PTPortalContext.GET_SESSION_PREFS_URL = 'http://localhost/portal/server.pt?space=SessionPrefs&control=SessionPrefs&action=getprefs';
PTPortalContext.SET_SESSION_PREFS_URL = 'http://localhost/portal/server.pt?space=SessionPrefs&control=SessionPrefs&action=setprefs';

...and that's it...Plumtree/BEA delivered your ALUI Javascript-Enabled "Session" Preference behavior.

Javascript preferences of all types: How does this work?

All I did was extend this principle to make all types of preferences available through Javascript and, as you now understand, through the behind-the-scenes preference activity space. Since I posted the code on the "ALUI Toolbox Google Project", I will skip the fine details of the code...

The main outline is:

  • Create a new activity space called "PortalPrefs" (instead of SessionPrefs)
  • Update the PortalPrefsControl class to take into account a new parameter, "type", which allows you to specify the type of preference you want to get/set (possible values are: portlet, admin, session, user, community, communityportlet).
  • Update the PortalPrefsModel class with the corresponding preference getter/setter methods for each preference type (or scope)

Now, after installing this new code, the URLs to access the activity space are (note the new space, control and type parameters):

http://your.host.domain.com/portal/server.pt?space=PortalPrefs&control=PortalPrefs&action=getprefs&type=PreferenceType&_preferencename=

http://your.host.domain.com/portal/server.pt?space=PortalPrefs&control=PortalPrefs&action=setprefs&type=PreferenceType&_preferencename=preferencevalue
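
As a side note, the URL pattern above is easy to generate programmatically. Here is a small, self-contained Java sketch (the helper class and its names are mine, purely illustrative; only the query-string layout comes from this article):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PortalPrefsUrl {

    // builds a PortalPrefs activity space URL for the given action
    // ("getprefs" or "setprefs"), preference type, and preference name/value
    public static String build(String portalBase, String action, String type,
                               String prefName, String prefValue) {
        StringBuilder url = new StringBuilder(portalBase)
                .append("?space=PortalPrefs&control=PortalPrefs")
                .append("&action=").append(action)
                .append("&type=").append(type)
                .append("&_").append(URLEncoder.encode(prefName, StandardCharsets.UTF_8))
                .append("=");
        if (prefValue != null) {
            url.append(URLEncoder.encode(prefValue, StandardCharsets.UTF_8));
        }
        return url.toString();
    }

    public static void main(String[] args) {
        System.out.println(build("http://your.host.domain.com/portal/server.pt",
                "setprefs", "user", "myPref", "myValue"));
    }
}
```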

Then you have a choice as far as Javascript is concerned: either you create your own layer that performs the call to this new activity space, or you plug this new behavior into the already existing Javascript framework, without any modification to the framework itself.

I personally chose the latter, since I always try not to modify the existing portal libraries (just to allow for easier future upgrades of those libraries). All you have to do is make sure to override, on your portal page, the PTPortalContext URL properties with the new activity space URL endpoint BEFORE you call the already existing GetSession()/SetSession() Javascript methods. To do that, you could simply create the following facade JS methods to be called from your portlets:

<script>

function GetPortalPreference(prefType, prefname){

    // point the JS framework at the new activity space endpoint
    PTPortalContext.GET_SESSION_PREFS_URL = 'http://your.host.domain.com/portal/server.pt?space=PortalPrefs&control=PortalPrefs&action=getprefs&type=' + prefType;

    GetSession(prefname); // call to the already existing JS framework for session preferences

}

function SetPortalPreference(prefType, prefname, prefvalue){

    PTPortalContext.SET_SESSION_PREFS_URL = 'http://your.host.domain.com/portal/server.pt?space=PortalPrefs&control=PortalPrefs&action=setprefs&type=' + prefType;

    SetSession(prefname, prefvalue); // call to the already existing JS framework for session preferences

}

</script>

That's it! Feel free to browse the code (Java version only for now) at the Google Code project I created (I also built the JAR that contains this code... that way, you can easily install and test it on your portal), and please let me know your thoughts! Hopefully, this will be integrated into future releases of ALUI.

Monday, August 4, 2008

ALUI Best Practice - Don't Hardcode ObjectIDs: Use UUIDs Instead

In ALUI, there are two ways to uniquely identify an object:

  • ClassID (the type of object) + ObjectID (The ID of the object within the classID family)
  • Global UUID (unique ID throughout the environment)

The ClassID/ObjectID combination is used throughout the Portal API to query/open/manage ALUI objects, as well as to navigate to communities and pages. The ALUI Adaptive Tags are no exception: you will notice that they require an ObjectID/ClassID to perform their tasks, as in, for example, the opener tag:

<pt:standard.openerlink pt:objectid="219" pt:classid="514" pt:mode="2" target="myWindow">view community page</pt:standard.openerlink>

The main problem is not that ObjectID/ClassID is a bad way to identify an object, but that the ObjectID will NOT necessarily (it is actually very improbable that it will) be the same when you migrate objects from one environment to another. And that's where it can hurt...

Indeed, good practice is to test your creation in DEV / TEST / STAGING etc... and then migrate it using the ALUI migration tool. Since ObjectIDs will be different after migration to the new environment, all the ObjectIDs used in Adaptive Tags will have to be changed... a hassle indeed.

Fortunately, a UUID does NOT change (barring very improbable cases) as objects migrate from environment to environment. So it is a good option if you need to hardcode IDs, especially in Adaptive Tags, and I'd recommend it everywhere you can use it.

To make this possible, I built an Adaptive Tag that does just that: transform a UUID into its corresponding ClassID/ObjectID pair. All you need to do is use that tag before using, for example, the opener link tag. The object and class IDs are stored in shared memory (the adaptive tag framework) with the specified scope and can be reused by any other tags that require an ObjectID/ClassID.

<pt:taglibname.convertuuidtoid pt:uuid="{UUID}" pt:objectid="objectIDKey1" pt:classid="classIDKey1" pt:scope="portlet request" />

<pt:standard.openerlink pt:objectid="$objectIDKey1" pt:classid="$classIDKey1" pt:mode="2" target="myWindow">view community page</pt:standard.openerlink>

What's even better is that you can use this tag even for your remote portlet applications, like any other tags.

The code to change a UUID into the ObjectID/ClassID pair is fairly simple. Just get the Migration Manager and call its convert method, UUIDToObjectID, to get the job done. I created a helper method that packages it all together:

public Object[] getClassObjectID(String uuid) {
    Object[] oClassIDObjectID = null;
    if (null != uuid) {
        try {
            // oPTSession is an already-connected IPTSession instance
            IPTMigrationManager oPTMigrationMgr = (IPTMigrationManager) oPTSession
                    .OpenGlobalObject(PT_GLOBALOBJECTS.PT_GLOBAL_MIGRATION_MANAGER, false);
            oClassIDObjectID = oPTMigrationMgr.UUIDToObjectID(uuid);
        } catch (Exception exc) {
            oClassIDObjectID = null;
        }
    }
    return oClassIDObjectID;
}

The first item in the returned array is the ClassID (oClassIDObjectID[0]), the second item is the object ID (oClassIDObjectID[1]).
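
For illustration, here is how consuming that returned pair might look. This is a self-contained sketch: the lookup is stubbed with hardcoded values (a hypothetical "sample-uuid" and the IDs from the opener-link example), since the real version needs a live IPTSession:

```java
public class UuidIdPairDemo {

    // stand-in for the helper above: returns {ClassID, ObjectID} or null
    // (hardcoded here; the real version calls IPTMigrationManager.UUIDToObjectID)
    static Object[] getClassObjectID(String uuid) {
        if ("sample-uuid".equals(uuid)) {
            return new Object[] { Integer.valueOf(514), Integer.valueOf(219) };
        }
        return null;
    }

    public static void main(String[] args) {
        Object[] ids = getClassObjectID("sample-uuid");
        if (ids != null) {
            int classId  = ((Integer) ids[0]).intValue(); // first item: ClassID
            int objectId = ((Integer) ids[1]).intValue(); // second item: ObjectID
            System.out.println(classId + "/" + objectId); // 514/219
        }
    }
}
```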

You can download the code on my newly created subversion project (http://alui-toolbox.googlecode.com) on Google code (I'll be updating/maintaining this as I see fit - feel free to suggest at will :)):

svn checkout http://alui-toolbox.googlecode.com/svn/trunk/ alui-toolbox-read-only

No more mucking around with IDs during migration :)

Hope that helps!

Wednesday, July 23, 2008

Inside ALUI Grid Search: Redundancy Bug (6.1 on Windows at least)

With ALUI 6.1, BEA introduced a completely revamped search component for ALUI, allowing for better redundancy and better throughput: Grid Search. The main advantages of that new search component are:

  • Multiple search nodes provide redundancy for serving search requests.
  • The search index can be split into multiple partitions, each attached to various search nodes, to increase throughput.

All the nodes on the same partition automatically replicate the search index locally to guarantee redundancy and performance.

Although multiple nodes provide redundancy, all the nodes need to access a central "cluster" data repository located somewhere on the network (through a file share). It is located by default at <ALUI_HOME>/ptsearchserver/6.1/cluster. What is usually done is to share that folder (a simple network share if you are on Windows) and set up the other nodes to access that share as their cluster repository. This cluster repository holds the cluster information (node and partition info) and the multiple search checkpoints that allow for search index backup.

One main problem that I personally experienced with that design is that this cluster repository represents a single point of failure... if the cluster share suddenly becomes unavailable (hard disk, server, or network failure), none of the nodes can talk to the cluster anymore, and problems follow.

And actually, a huge problem occurs in that case: if the cluster share is not available, all the nodes suddenly hit an "Out Of Memory" exception and shut down abruptly. Thus, although you deployed multiple nodes and partitions, if the cluster share is down, your search architecture is... down.

It is pretty easy to test (at least I successfully reproduced the bug on ALUI 6.1 MP1 Patch 1 on Windows Server 2003): have all your nodes running, and simply remove the share from your cluster folder... all your nodes will go down (apart from the one that accesses the share locally, if the cluster share is installed on the same server as one of the nodes).

Two options from there:

  • make sure the share is never down (Windows clustering, a redundant NAS cluster, or PolyServe technologies)
  • install the critical fix from BEA that fixes this bug

If you don't have an infrastructure that provides the first, expensive option, you might want to look seriously into the second one... and contact your sales rep ASAP. Basically, the critical fix allows the nodes to continue serving requests even if the cluster share is no longer available. All the nodes switch automatically to read-only mode, without the "out of memory" exception that occurred before.

Although this is much better, some problems remain with that critical fix. When in read-only mode, the nodes no longer index new content... your search index is then frozen at the point in time when the cluster share went down, and any new object or document will not show up in search as long as the cluster share is not restored. The second problem is that the nodes will NOT automatically roll back to read/write mode when the cluster share becomes available again. That requires a manual restart.

But compared to a total shut down of search, these problems seem less important indeed!

I am not 100% sure this fix has been pushed into ALUI 6.5, but I sure hope so. And by fix, I mean a total fix, including automatic rollback to "normal" mode when the share is available anew, or even TOTAL continuity of service when the share goes down...

Please let me know (leave comment) if you have that information on 6.5, or if you reproduce this with other versions of the portal.

Tuesday, July 15, 2008

ALUI Tool: URL (or text) Migration within Publisher Items

Following my previous article, "ALUI Administration Tool for Environment Refresh: String Replacing for URLS", about migration between environments, here is an extra piece that you might find very useful (I surely use it all the time).

Basically, as explained in the previous article, it is common to have different DNS aliases set up per environment (e.g. for the Publisher remote server). The Publisher browsing URL is no exception to this rule:

  • http://publisher-content.domain.com/publish for production
  • http://publisher-content-stg.domain.com/publish for staging
  • http://publisher-content-dev.domain.com/publish for development

When you add an image or a link in the free text editor of a content item in Publisher, it will most of the time create an absolute URL to that resource... thus you can imagine that these DNS aliases will end up inside a lot of Publisher items throughout the environment.

What happens when you migrate the Publisher DB from one environment to another? Well, you will have a lot of DEV DNS aliases within your staging environment (in the case of a DEV promotion to Stage), or a lot of production DNS aliases within your DEV environment in the case of a production refresh to DEV.

In my previous article, "ALUI Administration Tool for Environment Refresh: String Replacing for URLS", I was mostly talking about migrating URLs within portal objects, but not really about migrating URLs within Publisher items.

Thus, I created some DB scripts (SQL Server only for now) that do just that...

1. puburls-PTCSDIRECTORY-nvarchar-replace.sql: script to change a particular string within the PUBLISHEDTRANSFERURL and PUBLISHEDURL columns (which are mapped in the DB to columns of type VARCHAR)
2. puburls-PTCSVALUE-ntext-replace.sql and puburls-PTCSVALUEA-ntext-replace.sql: scripts to change a particular string within the "long text" property of a Publisher item (which is mapped in the DB to a column of type TEXT):
   1. PCSVALUES.LONGVALUE (hosting the long text of the currently published item)
   2. PCSVALUESA.LONGVALUE (hosting the long text values of all the previous versions of the item)

For the first script, the PUBLISHEDTRANSFERURL and PUBLISHEDURL columns are of type VARCHAR, so it is easy to replace a string within them using the MS SQL REPLACE function. A simple SQL statement does the job here.

The main challenge was really with the second scripts... indeed, within a column of type TEXT, the SQL REPLACE function cannot be used... The workaround is to use the PATINDEX and UPDATETEXT functions within a Transact-SQL (T-SQL) script. To give credit to the right person: I adapted a script that I found at ASP FAQ - How do I handle REPLACE() within an NTEXT column in SQL Server?
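
To make the workaround concrete, here is a minimal, single-occurrence sketch of the PATINDEX/UPDATETEXT approach (the table and column names are the ones from this article; the real script loops until no match remains and handles all rows, so treat this as an illustration only):

```sql
DECLARE @oldString nvarchar(256), @newString nvarchar(256), @len int
DECLARE @ptr binary(16), @pos int

SET @oldString = N'-DEV.DOMAIN.COM'
SET @newString = N'-TST.DOMAIN.COM'
SET @len = LEN(@oldString)

-- grab a text pointer and the 0-based offset of the first match in one row
SELECT TOP 1
       @ptr = TEXTPTR(LONGVALUE),
       @pos = PATINDEX('%' + @oldString + '%', LONGVALUE) - 1
FROM PCSVALUES
WHERE LONGVALUE LIKE '%' + @oldString + '%'

-- splice the replacement in place: delete @len chars at @pos, insert @newString
IF @pos >= 0
    UPDATETEXT PCSVALUES.LONGVALUE @ptr @pos @len @newString
```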

DISCLAIMER: ALTHOUGH I PERSONALLY USE THIS SCRIPT ALL THE TIME, THERE IS NO GUARANTEE; USE THIS TOOL AT YOUR OWN RISK blah blah blah, AND USE IT ONLY IF YOU ARE PROFICIENT ENOUGH WITH ALUI PORTAL TECHNOLOGIES.

Attached is the zip file package that contains the 3 scripts.

Don't forget to change the string to look for, and the string to replace it with.

puburls-PTCSDIRECTORY-nvarchar-replace.sql

UPDATE [dbo].[PCSDIRECTORY]
SET
[PUBLISHEDTRANSFERURL]=REPLACE([PUBLISHEDTRANSFERURL],'-DEV.DOMAIN.COM','-TST.DOMAIN.COM'),
[PUBLISHEDURL]=REPLACE([PUBLISHEDURL],'-DEV.DOMAIN.COM','-TST.DOMAIN.COM')
WHERE
publishedtransferurl like '%-DEV.DOMAIN.COM%'
or publishedurl like '%-DEV.DOMAIN.COM%'



puburls-PTCSVALUE-ntext-replace.sql and puburls-PTCSVALUEA-ntext-replace.sql

SET @oldString = N'por-pubcontent-dev.domain.com'; -- remove the N prefix if the column is TEXT rather than NTEXT
SET @newString = N'por-pubcontent-tst.domain.com'; -- remove the N prefix if the column is TEXT rather than NTEXT

That's it! Let me know if you find it as useful as I do! Enjoy!!

Monday, June 30, 2008

ALUI Publisher - Part 2: Increase Performance by enabling REAL Caching - No Redirect Bug Fix

Previously (http://fsanglier.blogspot.com/2008/06/alui-publisher-part-2-increase.html), I talked about (and, I hope, proved) why using a "no redirect" mechanism for serving published content from Publisher is the best option to enable portal caching. Publisher 6.4 already offers such a possibility (although it is not publicized much): using published_content_noredirect.jsp in the portal Published Content Web Service object instead of the standard published_content_redirect.jsp.

Unfortunately, if you start using this, you are going to see a weird behavior: the published content gets truncated in some special cases... and this is due to the way the JSP has been coded. You have several options: either you wait for a critical fix to be issued by BEA (I am not aware of one yet), or you upgrade to ALUI 6.5 (I hear this has been fixed in 6.5, though I have not verified it), or you simply fix it yourself, as it is a simple fix to implement (ultimately, it might be the same type of code that a CF would ship, I imagine).

By looking at the JSP within the Publisher web application archive (ptcs.war - explode the war using the jar command), we can see what's wrong and why the content is truncated in some cases:

HttpURLConnection conn = (HttpURLConnection)url.openConnection();

// make the request
conn.connect();

//read the content length
int contentLength = conn.getContentLength();

//if there is content, forward to the requesting client
if( contentLength > 0 ){
// UTF-8 is necessary
InputStreamReader isr = new InputStreamReader(conn.getInputStream(), "UTF-8");
char[] content = new char[contentLength];
isr.read(content);
isr.close();
out.write(content);
}


As you can see, an HTTP GET request is made, and the content length of the response is obtained from the getContentLength() method. This call gets the content length from the response header rather than actually counting the bytes contained in the response body. Since the code bases itself on this number to output the content to the JSP output stream (see above: a char array of length equal to contentLength), the content will indeed be truncated whenever the contentLength number is not correct...
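
A quick, self-contained illustration of why trusting the header number is fragile (no Publisher code involved; the class and method names are mine): a byte count is not a char count for UTF-8, and a single read() need not fill the array:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class ContentLengthPitfall {

    // returns { byte length (what Content-Length would report),
    //           chars obtained from a single read() call }
    static int[] measure(String body) {
        try {
            byte[] bytes = body.getBytes("UTF-8");
            InputStreamReader isr =
                    new InputStreamReader(new ByteArrayInputStream(bytes), "UTF-8");
            char[] content = new char[bytes.length]; // sized in bytes, like the JSP does
            int read = isr.read(content);            // a single read, like the JSP does
            return new int[] { bytes.length, read };
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        int[] r = measure("caché"); // 5 chars, but 6 bytes in UTF-8
        System.out.println(r[0] + " bytes vs " + r[1] + " chars read");
    }
}
```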



A simple correction (and more robust code) is to make sure ALL the content is pushed to the output stream, independently of the contentLength number returned by the response header. Here is my code below that fixes the issue, and also increases performance by using the preferred BufferedReader wrapper class instead of the bare InputStreamReader:




------EDITED 3/12/2009--------
BufferedReader bisr = null;
try {
    bisr = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
    String line;
    while ((line = bisr.readLine()) != null) {
        out.println(line);
    }
} catch (Exception exc) {
    throw exc; //to be caught by the global try catch
} finally {
    if (bisr != null)
        bisr.close();
    bisr = null;
}
return;
------END EDITED 3/12/2009--------

Basically, the code reads the entirety of the content, line by line, until the last character... and writes it all to the output stream. This does not rely on the contentLength at all, and thus is more reliable and robust.



After changing published_content_noredirect.jsp as above, you can repackage ptcs.war with the corrected JSP (from the root of the previously extracted ptcs.war folder, run the jar -cvf ptcs.war * command) and redeploy to ALL redirector and Publisher instances...



Voila, you have your perfect solution for ALUI 6.1 and Publisher 6.4 (and previous versions too).

ALUI Publisher - Part 2: Increase Performance by enabling REAL Caching

In my previous post, http://fsanglier.blogspot.com/2008/02/alui-publisher-increase-performance.html (man, already a couple of months ago... I know, I've been sucked into a black hole since then :) ), I talked about how to best design a scalable and redundant ALUI Publisher architecture. But what I had not pushed enough in that article was performance.

In my opinion, the standard Publisher behavior for serving content has a performance flaw (at least in ALUI 6.1 - Publisher 6.4) related to caching... Since there are ways around it (that's what we do, right?), I think you might benefit from this a lot; in my last implementation, the change explained below took performance under load to completely new levels (while reducing DB requests as well as DB and Publisher redirector CPU utilization)... So here it goes...

As I explained in the previous post, each published content portlet within a portal page makes a request to the Published Content Redirector component, and particularly to the JSP page in charge of performing the redirect: published_content_redirect.jsp (you can see it defined in the Published Content portal web service object). As the name says, this Java Server Page (JSP) performs a 302 redirect to the published content item (as seen in the previous post, that location should be served by your favorite web server, Apache or IIS for example...). But BEFORE making this redirect, it must know where to redirect to... so the code makes a DB request to the Publisher database to get the browsing path to the published content item (passing the Publisher content item ID that was saved when you created your published content portlet earlier).

OK, so here is the first flaw I was talking about: to display content through the Publisher portlets within a portal page, you can see from the above explanation that multiple calls to the Publisher DB will be made. Let's say we have 5 Publisher portlets on the page (not uncommon): for each page rendering, for 1 user, 5 DB calls will be made to the Publisher DB (in addition to multiple other DB calls for portal and analytics). If we are now talking about thousands of users, that is far too many DB calls simply to display content that does not change often. This puts an unnecessary load on the DB infrastructure, and it increases the page load time accordingly, since DB calls are inherently slower than simple content rendering...

While reading this, some of you are already thinking, for a good reason: CACHING! Yeah, indeed, caching is the secret to scaling and performance (it is not always necessary to pull out the big bucks and supercharge the DB infrastructure even more). And great for us, the portal offers excellent out-of-the-box caching capability within the web service object: simply set the minimum caching to 2 hours and the maximum to 20 days, and normally you would think the portal should simply cache the published content for that amount of time... removing the need to call published_content_redirect.jsp altogether, and thus the need to make a DB call!! ...but it does not happen this way. You don't believe me? Enable access logging on the published content redirector components (un-comment the "Access logger" section within <ALUI HOME>\ptcs\6.4\container\deploy\jbossweb-tomcat50.sar\server.xml) and you will clearly see that, even though your page contains only published content portlets that should be cached, published_content_redirect.jsp is still constantly called... and thus caching is not really... caching.

Why does this happen? It is because of the redirect mechanism for serving content... From www.w3.org, 302 is explained this way: "The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field." Thus, the portal is doing its job perfectly (it acts as a client here) and will never cache a response with a temporary redirect 302 status code.

OK, so what can make it better? Changing the redirect mechanism to something that does not redirect... by using another JSP actually already present within Publisher 6.4 (I have not verified whether it is present in earlier versions): published_content_noredirect.jsp. Instead of issuing a redirect to the published content, this JSP performs an HTTP GET request to it and writes the content to the JSP output stream. The response code is now a simple 200 OK ("The request has succeeded" - www.w3.org) that can be cached by the portal. To enable this, simply change the HTTP URL of the Published Content Web Service object to be published_content_noredirect.jsp instead of published_content_redirect.jsp. Of course, check out the Publisher redirector access logs to see the dramatic difference... under load, you will initially see a bunch of requests to published_content_noredirect.jsp, but very, very quickly the access log goes silent, all the content being truly cached by the portal...

The result? You can increase the load even more, the page response time stays the same (or even improves), and DB utilization is not altered by that load... you simply have a site that performs much better. Our initial results showed that under constant, intensive load (with and without the change), the Publisher infrastructure would no longer crash, the CPU usage of both the DB and the Publisher redirectors was reduced accordingly, the number of DB requests dropped, and the load could actually be increased to new levels without loss of performance or functionality, thanks to caching.

Since this post is already long, I am going to stop here for now... Please read the next post (http://fsanglier.blogspot.com/2008/06/alui-publisher-part-2-increase_30.html) to understand the second flaw: published_content_noredirect.jsp has a truncation bug... which is fixable, of course :)

So long!