Tuesday, November 17, 2009

Deconstructing Publisher (6.4) Rich Text Editor and Fixing the Adaptive Tag Reformatting Bug

Stating the Problem

Have you ever noticed that the Publisher Rich Text Editor (RTE) only supports a handful of Adaptive Tags: Current Time, Page Name, Community Name, and User Profiles?

I am not saying that only because no other tags show up under the RTE “Adaptive Tag” button (which is already kind of weird, but would not be so bad compared to what’s coming), but because if you use any other tag in the RTE source view, switching to the design view will completely mess up your tag…

Here is an example: I want to create a link to a portal page within my RTE content (no, not a common task at all)…and because I like the Adaptive Tags, I naturally use the pt:standard.openerlink tag, as follows:

<pt:standard.openerlink xmlns:pt='http://www.plumtree.com/xmlschemas/ptui/' pt:objectid='426' pt:classid='512' pt:mode='2'>A link To Portal Page</pt:standard.openerlink>


Great, I enter that in my Publisher RTE field, in the HTML source view…and going back to the RTE edit view automatically transforms my nicely formatted (and working) tag into something useless:



<standard.openerlink pt:mode="2" pt:classid="512" pt:objectid="426" xmlns:pt="http://www.plumtree.com/xmlschemas/ptui/">A link To Portal Page</standard.openerlink>


As you can see, the RTE design view “ate” the “pt:” prefix and messed up the closing tag as well…



Ultimately, the portal will not understand that tag anymore…and your hope of creating a nice link to a page using best-practice Adaptive Tags is just not realistic so far.



This bug is crazy to me…I could have understood it if the problem happened only with the custom tags you wrote yourself – and yes, it obviously happens with any custom tags you wrote – but it does not even work with the default “Out of the Box” ALUI tags…I just don’t know how this passed QA…oh well…



Apparently this bug is fixed in Publisher 6.5 (check out the release notes), which I believe uses a completely new Rich Text Editor…But I have not personally verified whether custom tags (the ones you or I wrote) also work fine with the RTE in Publisher 6.5…to be continued on this. (Let me know if you have verified it.)



Either way, if you are not ready to move to Publisher 6.5 quite yet, well, you have 2 options:




  • Try to use a better RTE instead of the one Publisher provides (this might be a good idea, although it has caveats too, such as uploading files into Publisher or creating links to other Publisher items)


  • Continue to read this post, and implement my fix.



Understanding the Why



After a couple of hours of reverse engineering (thanks to Visual Studio 2008 awesome JavaScript debugging capabilities), I finally found a fix for this issue…



But first, let me explain briefly how it all works…




  1. The Publisher servlet in charge of drawing the content item entry screen (Java code) outputs a huge XML block within the HTML source of the page…This XML is the base information that is read and transformed into JavaScript objects, which are used by the RTE buttons and various other behaviors.


  2. Within that XML, we can see a pretty interesting element (very related to our problem): “replacementToken”



    <replacementToken index="1" class="PTRichTextToken">
    <pattern>(&amp;lt;PT:PAGENAME[^&amp;gt;]*/&amp;gt;)|(&amp;lt;PT:PAGENAME[^&amp;gt;]*&amp;gt;(.*?)&amp;lt;/PT:PAGENAME\s*&amp;gt;)</pattern>
    <replacementHTML>Page Name</replacementHTML>
    <style>background-color:#FFFF33;border:1px dashed #B9B933;</style>
    <tooltip>Name of the current portal page</tooltip>
    </replacementToken>



  3. As you can see above, the replacementToken contains a regex pattern, a replacement name, a style, and a tooltip…This “token” (or more precisely, its existence in the page source) is exactly what makes the only 4 ALUI Adaptive Tags (Current Time, Page Name, Community Name, and User Profiles) work…

    Basically, what happens when you toggle between “Design View” and “Source View” is the following:


    1. In source view, you enter your Adaptive tag the way you should, nicely formatted the way the documentation says…


    2. When you toggle to design view, each replacement token regex pattern is executed to find out whether there are indeed any pt tags in the source…


    3. Each positive find is first saved into a JavaScript hashmap, and then transformed into a proper, valid HTML tag: <span>


    4. So really, in design view, the tag you entered or inserted is just a <span></span> tag, with a particular ID and style as defined by the token…

      Hence, for example, in the design view the pagename tag would really be something like:

      <span id="PTRichTextToken1" style="background-color:#FFFF33;border:1px dashed #B9B933;">Page Name</span>

      This is why you see the dashed yellow line when you add the pagename tag using the RTE button…


    5. And if you toggle back to the source view, the replacement happens again in order to re-output the original tag you entered (using the JavaScript hashmap filled in step 3).





Ok, so this is, overall, the way the Publisher 6.4 RTE works with Adaptive Tags… Now that we understand this, we need to implement our fix, right?
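To make the mechanism concrete, here is a minimal, self-contained JavaScript sketch of that round-trip. It is illustrative only: names like tokenStore, toDesignView, and toSourceView are mine, not Publisher's, and the real PTControls.js keeps much more state than this.

```javascript
// Simplified stand-in for the design-view/source-view token swap.
// Step 2: the replacement token's regex pattern (here, the pagename tag).
var pattern = /(<pt:pagename[^>]*\/>)|(<pt:pagename[^>]*>(.*?)<\/pt:pagename\s*>)/gi;

var tokenStore = {}; // step 3: hashmap keyed by the generated span id
var counter = 0;

function toDesignView(source) {
  // Each regex match is saved, then replaced by a styled <span> (steps 3-4).
  return source.replace(pattern, function (match) {
    var id = "PTRichTextToken" + (++counter);
    tokenStore[id] = match; // remember the original tag text
    return '<span id="' + id + '" style="background-color:#FFFF33;' +
           'border:1px dashed #B9B933;">Page Name</span>';
  });
}

function toSourceView(html) {
  // Step 5: swap each placeholder span back for the original tag.
  return html.replace(/<span id="(PTRichTextToken\d+)"[^>]*>.*?<\/span>/g,
    function (match, id) { return tokenStore[id]; });
}

var src = 'Hello <pt:pagename xmlns:pt="http://www.plumtree.com/xmlschemas/ptui/" />!';
var design = toDesignView(src);  // tag replaced by a styled <span>
var back = toSourceView(design); // original tag restored intact
```

The key point is that any tag with no matching replacementToken never enters the hashmap, so the design view's generic HTML cleanup is free to mangle it, which is exactly the bug described above.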



Delivering the How



Really, the fix is easy…and if I had the Publisher Java code, it would simply be a 5-minute job…All we would need to do is add all the new tokens you need (with a proper regex pattern for each one, etc.) to the Java class in charge of outputting the XML…



Well, I don’t have the code…



So the 2nd option is hacking the JavaScript in order to add the new replacement tokens you want to the already-built array of replacement tokens…simple, right?



The file that contains all these replacement mechanisms is PTControls.js (found in \imageserver\plumtree\common\private\js\jscontrols\)…and that’s the one we will modify a bit…



First, simply add the following method in there…I added it just before the method "PTRichTextControl.prototype.init" (approx. line 5178) because we will need to modify that method too…



//customization: fabien sanglier
PTRichTextControl.prototype.AddCustomReplacementToken = function()
{
    //make sure the customtokens array has the same size as the tokenDecorators array...they work hand in hand.
    //note: "\\s" (escaped backslash) is required so the pattern string really contains "\s";
    //in a JavaScript string literal, a plain "\s" silently becomes just "s".
    var customtokens = new Array(
        new PTRichTextToken('(<pt:customtaglib.openerlink[^>]*/>)|(<pt:customtaglib.openerlink[^>]*>(.*?)</pt:customtaglib.openerlink\\s*>)','$3 (OpenerLink)'),
        new PTRichTextToken('(<pt:standard.openerlink[^>]*/>)|(<pt:standard.openerlink[^>]*>(.*?)</pt:standard.openerlink\\s*>)','$3 (Standard OpenerLink)')
    );

    //make sure the tokenDecorators array has the same size as the customtokens array...they work hand in hand.
    var tokenDecorators = new Array(
        new Array('background-color:#FFFF33;border:1px dashed #B9B933;','Opener link to any object in the portal'),
        new Array('background-color:#FFFF33;border:1px dashed #B9B933;','Standard opener link to any object in the portal')
    );

    if(this.replacementTokens && this.replacementTokens.length > 0){
        if(customtokens && customtokens.length > 0){
            var startindex = this.replacementTokens.length;
            var addDecorators = (tokenDecorators && customtokens.length == tokenDecorators.length);
            for(var i = 0; i < customtokens.length; i++){
                this.replacementTokens[startindex + i] = customtokens[i];
                if(addDecorators){
                    this.replacementTokens[startindex + i].style = tokenDecorators[i][0];
                    this.replacementTokens[startindex + i].tooltip = tokenDecorators[i][1];
                }
            }
        }
    }
}


If you read the above method carefully, you’ll see 2 arrays: customtokens and tokenDecorators…by adding new items to these 2 arrays (which work together), you can add support for as many tags (out of the box or custom) as you want.



Next, add 1 line to PTRichTextControl.prototype.init that simply calls our new method:



PTRichTextControl.prototype.init = function()
{
    ...

    //begin customization: fabien sanglier: add the extra array for the custom tags
    this.AddCustomReplacementToken();
    //end customization

    PTControls.makeGlobalObject(this,this.objName,true);
}


Not too bad to fix such a bad bug, right?



Note 1: Make sure you clear your browser cache when you test! You might still get the old, un-customized JavaScript…



Note 2: this is not supported…make a backup of the original file and do it at your own risk (the usual disclaimers apply).



Final words



Why hasn’t this been fixed earlier? Go figure!



Is it annoying to have to do this so that out-of-the-box features work within an out-of-the-box product? Definitely!!



Was it interesting to understand this whole mess? Absolutely!



So long…

Wednesday, September 30, 2009

WCI: Varpacks and Dynamic Reload

Varpacks stands for Variable Packages…As their name shows, varpacks are portal objects that do 2 main things:

  • Load values from a file into the portal memory (XML format is preferred since there is already an XMLBaseVarPack class that helps load simple XML files)
  • Help access these loaded, in-memory values from anywhere within the portal application – very likely from your portal “customization” code (i.e. custom activity space, custom view, PEIs, etc.)

The goal here is not to re-explain what Varpacks are, since it is already pretty well explained in the ALUI/WCI developer documentation (4 pages):

http://download.oracle.com/docs/cd/E13174_01/alui/devdoc/docs60/Customizing_the_Portal_UI/Using_Varpacks/plumtreedevdoc_customizing_varpack_intro.htm

The goal here is to explain the following 2 points:

  1. The portal does not always have to be restarted to load new Varpack values from the XML file.
  2. Developed properly, a varpack can load/access “experience definition”-specific values, which can be very useful when your portal powers multiple portal sites, each with their own settings…

1 - First, to allow your varpack to be reloaded dynamically without the need to restart the portal, you simply need to specify it in your varpack code…and really, it is as simple as overriding the following method:

public override bool CanReloadVarPackFromUI()
{
    return true;
}

If the method returns true, the varpack can be reloaded from the UI; otherwise, it cannot…and can then be reloaded only at portal startup (hence requiring a portal restart if you change a value in your varpack XML file)…


And what is this UI? It is the fairly unknown “MemoryDebug” activity space (see this article from Function1 that explains how to access it: http://www.function1.com/site/2007/06/alui-portal-memory-debug-page.html).


Now, if you did everything correctly with your varpack development and deployment, you should see in the “MemoryDebug” screen your varpack name in the list of loaded varpacks, with a “view” button next to it…And if you returned “true” in the overridden CanReloadVarPackFromUI method as shown above, a “Reload” button will also show up. As you can imagine, clicking “reload” launches the process that loads the values from your varpack XML file into portal memory, hence reloading your new Varpack values without having to restart the portal.


2 – Secondly, let’s imagine your single portal environment powers multiple portals, each with their own style, colors, hostname, etc…well, it is easy to imagine, really: that’s what we do almost every time, and that’s what the portal is for.


But now, let’s imagine that you built customizations that should be loaded only within portal site A, but not within portal site B…In other words, you want to selectively load your customizations based on criteria such as portal hostname or experience definition ID…mmmmm, that is not really possible, since your customizations (view replacement, PEIs, custom activity space, etc.) are dynamically loaded when the portal starts and activated directly once loaded…


But with the use of a properly developed Varpack, you can actually achieve the above scenario easily…


What you need to do is to make your Varpack aware of the experience definition you are currently in, and then define in your XML file variable names suffixed with an experience definition ID.


Here is a sample XML file:


<?xml version="1.0" encoding="UTF-8"?>
<MyCustomVarpack>
    <Section1-200>
        <mycustomkey1 value="" />
        <mycustomkey2 value="" />
        <mycustomkey3 value="" />
        <mycustomkey4 value="" />
    </Section1-200>
    <Section2-201>
        <mycustomkey1 value="" />
        <mycustomkey2 value="" />
        <mycustomkey3 value="" />
        <mycustomkey4 value="" />
    </Section2-201>
</MyCustomVarpack>

As you can see, the sections have an ID appended to them…this ID represents the experience definition ID to which the values within the section apply…And as you can see, using this naming convention, you can define completely different values for experience definition 201.


Now, how do you make your Varpack class aware of the current experience definition ID you are in? Well, you simply need to store the experience definition ID in an instance variable of your varpack object…see the code below:


public class MyCustomUIVarPack : XMLBaseVarPack
{
    private static OpenLogger log = OpenLogService.GetLogger("MyCustomUIVarPackLibrary", typeof(MyCustomUIVarPack));

    private int m_ExpId = -1;

    private void SetExperienceId(int expDefId)
    {
        m_ExpId = expDefId;
    }

    ... ...
}

Then, to assign the current experience definition ID to this instance variable, you need to find the experience definition ID you are in, using the call TaskAPIServerSubPortal.GetSubPortalCachedObject(oUserSession)…


And then, simply assign the ID to the variable using the defined setter SetExperienceId(int expDefId)…here is the code:


//getting the varpack instance
MyCustomUIVarPack customVarpack = (MyCustomUIVarPack)vPackMgr.GetVariablePackage(MyCustomUIVarPack.VARPACK_ID);

//getting the experience definition using the user session
IPTSubPortalInfo objSubportalinfo = TaskAPIServerSubPortal.GetSubPortalCachedObject(oUserSession);
if(null != objSubportalinfo)
{
    log.Debug("Subportal Info Object ID is: {0}", objSubportalinfo.GetObjectID());
    customVarpack.SetExperienceId(objSubportalinfo.GetObjectID());
}

Ok, with that done, your varpack object is now “experience definition aware”…and as such, you can simply override the varpack GetValueAsString / GetValueAsBoolean / GetValueAsInt methods, appending the experience definition ID to the varpack key whose value you want to get.
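The lookup logic behind such an override is simple; here is an illustrative sketch (the real code would live in your C# GetValueAsString override, and the flattened "settings" structure and all names below are mine, purely for illustration):

```javascript
// Illustrative only: an experience-definition-aware varpack lookup.
// "settings" mimics the XML file above, flattened into "<section>-<expDefId>.<key>" entries.
var settings = {
  "Section1-200.mycustomkey1": "value for exp def 200",
  "Section2-201.mycustomkey1": "value for exp def 201"
};

// In the real varpack object, m_ExpId is assigned by SetExperienceId().
var m_ExpId = 200;

function getValueAsString(section, key) {
  // Append the current experience definition ID to the section name,
  // matching the naming convention used in the varpack XML file.
  return settings[section + "-" + m_ExpId + "." + key];
}

var value = getValueAsString("Section1", "mycustomkey1");
```

The same key can therefore resolve to completely different values depending on which experience definition the current user is browsing.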


I hope this article gives you a couple of new ideas with regard to Varpacks…and really makes you realize (if you haven’t already) that the WCI Varpack framework is definitely pretty cool and powerful.


As always, let me know what you think…and let me know if you used varpacks to implement other cool use cases…

Wednesday, September 2, 2009

WCI Documentation: Where are you now?

It’s been quite a while since my last post…yes I admit…I have been slacking off with my blog! And just to make sure I don’t injure myself with too sudden overexertion, this post will be short…but hopefully useful!

I just noticed that http://edocs.bea.com finally died…and now you get redirected to the Oracle site…

So where is my good old beloved “AquaLogic User Interaction (ALUI) Development Documentation” now? And all the rest for that matter?

After a couple of minutes of stupor and fear, I finally found it:

http://download.oracle.com/docs/cd/E13174_01/alui/devdoc/docs60/index.html

The whole aqualogic documentation is now accessible from:

http://www.oracle.com/technology/documentation/aqualogic_interact.html

Ouf, safe! I can now relax again!

It would have been nice if they had created a simple auto-redirect to the new link…oh well…

So long!

--EDIT

Ok, a quick update to this post: I just noticed that some of the links in the left-side menu of the developer documentation have not been updated properly…For example, the links to the API still go to edocs…

But do not worry: it seems that only the left nav menu is out of date…the API has definitely been carried over and can be accessed from:

Portal APIs:
http://download.oracle.com/docs/cd/E13174_01/alui/devdoc/docs60/References/Portal_API_Documentation.htm

Public APIs
http://download.oracle.com/docs/cd/E13174_01/alui/devdoc/docs60/References/api_index.htm

Ok that’s it, promise :)

Tuesday, June 9, 2009

Adding JavaScript includes to all portal browsing pages

In almost every portal deployment, we usually need to create some JavaScript libraries that can be used by various components…possible functions could be window openers, URL encoders/decoders, or any other utility functions that you might not want to include in every portlet.

Now, the question is: how do you add it to the portal page as (a) global include(s)? 2 possibilities:

  1. Add the JavaScript include(s) to the portal header/footer portlets (which is usually done in publisher, and displayed on all pages)
  2. Create a small UI customization that does the job more reliably.

You might have guessed it now: option 1 is not the one I am going to explain here. Why?

First, option 1 would not be worth a blog post since it is fairly easy to implement. But more seriously, depending on header or footer portlets is not a reliable way to include global resources. Indeed, different header/footer portlets are displayed based on the current experience definition and/or the current community you are in. Also, the header/footer portlet content can easily be modified (especially if implemented in Publisher, for example), which increases the chances of the resource include(s) being removed.

Ok, let’s dive into option 2!

By customizing the 2 main browsing portal DP (short for Display Page) classes (MyPortalDP and GatewayHostedDP), you will be able to include all the JavaScript resources you need on virtually all end-user portal pages: all portal community pages and the gateway page in hosted display mode.

Luckily, those DP classes provide a method to override: DisplayJavaScriptFromChild. This method returns an HTMLScriptCollection object that will be read in order to add the JavaScript to the header tag (in between the portal <head></head> tags).

The following could be what you might want in this method override (it gets the list of JavaScript files to include from a varpack, and iterates through this list to add each file to the returned HTMLScriptCollection object):

protected override HTMLScriptCollection DisplayJavaScriptFromChild()
{
    HTMLScriptCollection scriptCollection = base.DisplayJavaScriptFromChild();
    try
    {
        //get all the JS to include
        XPArrayList arrJSToInclude = ...Getting this from Varpack is a good idea, and as such, recommended...;
        if(null != arrJSToInclude && arrJSToInclude.GetSize() > 0)
        {
            IXPEnumerator jsEnum = arrJSToInclude.GetEnumerator();
            string src = "";
            while(jsEnum.MoveNext())
            {
                src = (string)jsEnum.GetCurrent();
                if(!"".Equals(src))
                {
                    HTMLScript script = new HTMLScript(HTMLScript.TYPE_JAVASCRIPT);
                    src = src.Replace("pt://images/", ConfigHelper.GetImageServerRootURL(m_asOwner));
                    script.SetSrc(src);
                    log.Debug("Adding JS external file with src: {0}", src);
                    scriptCollection.AddInnerHTMLElement(script);
                }
            }
        }
    }
    catch (Exception exc)
    {
        log.Error(exc, "An exception occurred while adding the custom javascript to the page head.");
    }
    return scriptCollection;
}


I never recommend customizing the portal classes directly...that way, if you ever need to upgrade the portal version, things are a bit simpler: your custom code is not everywhere.



So instead of customizing directly MyPortalDP and GatewayHostedDP, create new custom classes that inherit from those 2...then make sure your custom classes are loaded properly in the related Activity Spaces (PlumtreeAS and GatewayAS)...



Hope that is helpful!

Monday, May 11, 2009

ALUI Webcenter Grid Search: Maintaining the search cluster repository without loss of service

Sometimes, for maintenance reasons, the cluster repository has to be unavailable for a short amount of time (i.e. the NAS on which it is hosted is patched and needs to be restarted, or the central search cluster repository needs to be moved from one server to another, etc.).

The main problem is that search relies heavily on the availability of the search cluster repository in order for the portal content to be properly indexed (for example, the cluster registers any new index delta and ensures that these deltas are redistributed to each node’s local index; that way the search nodes are always in sync with one another). Unfortunately, I’ve noticed many times that when the cluster becomes unavailable for as little as a couple of seconds, the search infrastructure does NOT handle it gracefully…

At best, the nodes all go into “read-only” mode automatically (meaning the nodes act only as a query service instead of a query+index service), most of the time corrupting the process handles of the node acting as the “indexer” at the time of disconnection (you’ll see “invalid handles” errors in the search status screen, for example)…at worst, all the nodes shut down with a great “out of memory” error. Neither scenario is good, since the cluster will not go back into run mode automatically after the disruption is over, and will probably require an overall restart of the nodes.

If you need to do this in PROD, where uptime is usually a strong requirement, the idea is to perform such an operation without jeopardizing the search capabilities of your portal site(s). Indeed, search being so central to the portal, when it is down, many portlets or components relying on it will be down too, and overall service will be pretty degraded.

Fortunately, search comes with a powerful admin utility: cadmin.exe. You can find it on any of your search nodes, usually at the following path:

<pt_search_home>/bin/native/cadmin.exe

Using the tool, you can gracefully put all the search nodes in “read-only” mode before the maintenance operation. Indeed, when the nodes are in read-only mode, each node acts as a “disconnected” query service, providing search results solely based on its local index. While in that state, the search cluster repository can be fully unavailable…and apart from any new content not being indexed, end users will not see any search disruption.

So here are the commands you would want to perform, either manually or in a batch:

cadmin runlevel readonly --> puts all the nodes of the cluster in read-only mode
cadmin status --verbose  --> gives you the status of the cluster; useful to make sure the previous operation worked as expected

…perform your maintenance operation…

cadmin runlevel run      --> puts all the nodes of the cluster back in run mode
cadmin status --verbose  --> sanity check…

Another thing you will want to do before any search maintenance is perform a search checkpoint (in other words, a search backup). 2 options: do it manually using the Admin UI, or use the cadmin tool as follows:

cadmin checkpoint --create

As an extension of this, you can check out the other operations you can perform with the cadmin tool (pretty much everything you can do with the search cluster admin UI, with more power added to it) by entering cadmin --help

Hope that helps. Until next time, take care!

Sunday, March 15, 2009

ALUI Publisher: Easy bug fix with portlet templates…

I know ALUI Publisher is on the down slope, but that does not mean I should not share some of my findings in the meantime…Lately, I found something in Publisher that does not make sense to me. It has to do with the creation of portlets based on Publisher portlet templates. Let me explain the problem…and then the quick fix…

The problem:

Out of the box, when you create a portlet based on a publisher portlet template (i.e. announcement, news, etc.), there is a screen where you are asked to choose the publisher folder where the portlet’s publisher content should live…When clicking on “choose publisher folder”, a publisher tree pops up, and you can pick the publisher folder. The tree picker will only show you the folders where you have “producer” access…so far it makes sense, since only the “producer” role and above can create folders in publisher.

But what if a user does have “producer” access to a subfolder Z located in the tree structure at X > Y > Z…but does not have producer access to the parent folders X and Y? Then the tree picker simply stops showing the tree at X, hence not showing the subfolder to which the user actually has access…hence the problem.

A perfect (and probably common) use case is where you have various publisher sites (i.e. an intranet site, an internet site, etc.), and within each of these, you have various “community-related” publisher folders (folders that contain the web content of each community). You will want your community administrators (in the portal) to also have the “producer” or “folder administrator” role in their “community-related” publisher folder…but not in the parent publisher folders, where they should only have “reader” access…

The solution:

By reverse engineering publisher one more time, I found out that this problem can be fixed really easily…basically, the publisher tree picker is opened with the following URL:

“../folderpicker_frame.jsp?sid="+sessionId+"&showItemCategory=0&itemId="+parentFolderId+"&isMultiSelect=false&rootIsCheckable=true&minRoleId=12”

What is interesting in this URL is the last parameter: minRoleId=12. What it probably means is “show only the folders on which the user has at least role 12 – the producer role – assigned.” That’s it, we have our solution…By removing this parameter altogether, the tree will now show the folders to which the user has at least minimum (“reader”) access, hence fixing the use case explained above:

A producer in folder X > Y > Z will now be able to browse down to the Z folder, where he will be able to create a folder.

And don’t worry, it does not impact core security at all. If the user tries to select folder X or Y as a container for his “announcement” portlet, he will simply receive an error message saying “you do not have enough access to create a folder”…so no problem.

Detailed instructions:

  1. Make a backup of the publisher application files \bea\alui\ptcs\6.4\webapp\ptcs.ear and \bea\alui\ptcs\6.4\webapp\ptcs.war (obvious, no?)
  2. Unpack the publisher archive (ptcs.war)
    1. navigate to \bea\alui\ptcs\6.4\webapp folder
    2. create a new dir: ptcs
    3. navigate to that new dir and execute the following jar command: jar -xvf ../ptcs.war
  3. edit the extracted file: ./portlet_packages/portlet_create.jsi
  4. Find the JavaScript functions “ChooseParentFolder” and “NewParentFolder”
  5. Remove the “minRoleId=12” from the “var url = …” line (1st line in each function)
  6. Still in \bea\alui\ptcs\6.4\webapp\ptcs folder, repackage the war by executing: jar -cvf ptcs.war *
  7. move the newly created war to the \bea\alui\ptcs\6.4\webapp folder: move ptcs.war ../
  8. navigate to \bea\alui\ptcs\6.4\webapp: cd ../
  9. update the ptcs.ear file with the following command: jar -uvf ptcs.ear ptcs.war
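For reference, step 5 amounts to a change along these lines (a hypothetical reconstruction: the surrounding code in portlet_create.jsi differs, and sessionId/parentFolderId are placeholder values here, but minRoleId is the parameter to drop):

```javascript
// Placeholder values for illustration; in portlet_create.jsi these come from the page context.
var sessionId = "ABC123";
var parentFolderId = "1001";

// Before: the minRoleId=12 parameter restricts the tree to "producer" folders.
var urlBefore = "../folderpicker_frame.jsp?sid=" + sessionId +
    "&showItemCategory=0&itemId=" + parentFolderId +
    "&isMultiSelect=false&rootIsCheckable=true&minRoleId=12";

// After: with the parameter removed, the tree shows every folder the user can at least read.
var urlAfter = "../folderpicker_frame.jsp?sid=" + sessionId +
    "&showItemCategory=0&itemId=" + parentFolderId +
    "&isMultiSelect=false&rootIsCheckable=true";
```

Everything else about the picker URL stays exactly the same; only the trailing role filter goes away.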

That’s it…

Before restarting publisher, clean the temp files from the publisher container folders just to make sure your modification is loaded properly.

  • \ptcs\6.4\container\tmp\deploy
  • \ptcs\6.4\container\work\jboss.web\localhost

As usual, do this at your own risk…and test it well before deploying to production :)

Monday, February 23, 2009

New Portal Tools On ALUI Toolbox

In my efforts to improve the ALUI/Webcenter portal, and especially to enhance its admin management capabilities, I have created over time a set of utilities that I think could be useful to the ALUI/Webcenter community. In one of my previous posts, I already talked about the "PT URL Replace" utility (refer to: http://fsanglier.blogspot.com/2008/01/alui-administration-tool-for.html), which was already on the ALUI Toolbox Google Code project as a download only. What I did a few days ago was update the ALUI Toolbox Google Code project (http://code.google.com/p/alui-toolbox/) with the "PT URL Replace" code, as well as a couple of new applications/utilities:

  • Portlet Caching Clearer (web portlet - c#)
  • Object Identifier (web portlet - c#)
  • Web Service Changer (web portlet - c#)
  • Page Lister (web portlet - c#)
  • ALUI Knowledge Directory Security Agent (java app runnable from the console and/or a scheduled task such as ALUI jobs) – featured download at http://code.google.com/p/alui-toolbox/
  • Improvements for PT URL Replace utility – http://code.google.com/p/alui-toolbox/

All the above apps use the Server API (for a Server API introduction, refer to my previous posts) because the tasks performed would not be possible using only the IDK. They have been written for ALUI 6.1.x versions and might not work fully on version 6.5 and above without minor changes (because the Server API does change between portal releases). When I get the chance, I will update the Google Code project with 2 extra branches that follow the more current portal versions.

The code is released under the GPL license (you can find a copy of the GPL license in the root folder of the apps, or go to http://www.gnu.org/licenses/gpl.html), and of course is provided without any kind of warranty...

I'll go quickly over each of these utilities in order for you to understand why they might be useful.

Portlet Caching Clearer:
In order to maximize performance, each portlet in the portal can have output caching enabled for a particular timeframe. The drawback of such a caching mechanism is that the updates performed by content managers are not instantaneously viewable by end users. This portlet, which can be added to your "My Page", allows you to clear portlet caching in 3 different ways:

  1. Using the portal tree picker, select the portlet(s) you want to clear.
  2. Using the portal tree picker, select the "Community Page(s)" whose portlets you want to clear. The utility will find all the portlets currently added to the selected pages, and clear the cache for each of them.
  3. Using the portal tree picker, select the "Community(ies)" you want to clear. The utility will find all the portlets currently added to the selected pages of the selected communities, and clear the cache for each of them.

That way the content managers can easily clear caching for a set of pages, communities, or portlets...

Object Identifier:

Let's say that you have a portlet application that identifies a portal object in its config file by its UUID (or its ID, for that matter)...Now let's say you come back to that portlet 6 months later because you need to perform an improvement and/or fix something...Unless you clearly documented what object corresponds to this ID/UUID (and where it is in the portal), it will not be easy to find (unless you can easily run a DB query...). This portlet basically answers that need:

  • Provide a UUID and it will tell you the corresponding Classid/ObjectID pair + Object Name + Object Location in the Portal Admin Hierarchy.
  • Provide a Classid/ObjectID pair, and it will tell you the corresponding UUID + Object Name + Object Location in the Portal Admin Hierarchy.

Web Service Changer:

Have you ever noticed that you cannot change the web service attached to a portlet once you initially picked it and created the portlet? Now let's say that you have a bunch of publisher portlets that are all tied to the same "Publish Content Web Service" object. But all of a sudden, you change your mind and decide that some portlets should be tied to 2 different "Publish Content Web Service" objects, each one with a different caching timeframe (i.e. a long cache for content that hardly changes, and a shorter cache for content that changes often). With the portal out of the box, you cannot do that easily, and will probably have to recreate all the portlets and re-attach them to the same publisher content, etc. (big pain).

Well this portlet allows you to easily change the "portlet web service" OR the "portlet template" attached to a particular portlet or group of portlets:

  • Using the portal tree picker, pick the portlet you want to change.
  • Using the portal tree picker, pick the web service OR portlet template that you want these portlets to be assigned to.
  • Click Submit...Done.

Page Lister:

This simple utility displays, as a list of plain HTML links (simple <a href=""></a> tags), all the Community Pages in the portal that a certain user has access to...

Why do this? It is a simple way of creating a sitemap that a web "crawler" (either the ALUI web crawler, a Google appliance, etc...) could hit in order to dynamically discover all the ALUI pages of your site that are accessible to the guest user, for example...It could also be used as a security monitoring tool, to verify which pages are accessible to a particular user (i.e. Guest) and ensure that none of them is exposed by mistake, etc...
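To make the output concrete, here is a minimal, self-contained Java sketch of the rendering half of that utility: given page names mapped to their URLs (hypothetical values here; the real portlet pulls them from the portal API for the chosen user), it emits the plain anchor list a crawler can follow. This is only the shape of the output, not the actual Page Lister code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: render a map of page-name -> page-URL as a flat list of <a> tags.
public class PageListerSketch {

    static String renderSitemap(Map<String, String> pagesByName) {
        StringBuilder html = new StringBuilder();
        for (Map.Entry<String, String> page : pagesByName.entrySet()) {
            html.append("<a href=\"").append(page.getValue()).append("\">")
                .append(page.getKey()).append("</a>\n");
        }
        return html.toString();
    }

    public static void main(String[] args) {
        Map<String, String> pages = new LinkedHashMap<>();
        pages.put("Home", "http://myportal/portal/home"); // hypothetical page URL
        System.out.print(renderSitemap(pages));
    }
}
```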

ALUI Knowledge Directory Security Agent:

This utility, written in Java (we have to think about our Linux/Unix user base too :) ), was created as a scheduled task agent that acts as a security cop in the Knowledge Directory. It goes through all the KD folders the agent user has access to, and automatically assigns each card it finds the security of its parent folder.

All you need to provide are:

  1. KD folder ID to start with,
  2. A user ID / password (or session token if you run as a portal "external operation") the agent should impersonate with,
  3. (Optional) CrawlerIDs (if you want to change only the cards that were brought into KD by a specific set of crawlers)

The 2 main use cases I see for this tool:

  • Use it as a "cop" background schedule job in order to ensure the security on the cards is always right, based on the security of the folder.
  • In the event you use a single crawler with various filters that organize your content into various folders. Without this agent, the crawled cards get the security defined in the crawler's "crawled content permission" section...and that might not be in sync with the security of the various KD folders the content is organized into (due to the filters). If you run this security cop agent after each crawl, then the cards' security is set based on the destination folder, not on the crawler's "crawled content permission" section...
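The folder-walking logic at the heart of this agent can be sketched as follows. Everything here is a loudly hypothetical stand-in: the Folder and Card classes and the String acl field model what the real agent manipulates through the portal server API, with an optional crawler-ID filter like the one described above:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "security cop" recursion: stamp each card with its folder's ACL.
public class KdSecurityCopSketch {

    static class Card {
        String acl;
        int crawlerId;
        Card(String acl, int crawlerId) { this.acl = acl; this.crawlerId = crawlerId; }
    }

    static class Folder {
        final String acl;
        final List<Card> cards = new ArrayList<>();
        final List<Folder> subFolders = new ArrayList<>();
        Folder(String acl) { this.acl = acl; }
    }

    // Walk the folder tree recursively; an empty crawlerFilter means "all cards".
    // Returns the number of cards whose security was realigned with their folder.
    static int enforceFolderSecurity(Folder folder, List<Integer> crawlerFilter) {
        int changed = 0;
        for (Card card : folder.cards) {
            boolean inScope = crawlerFilter.isEmpty() || crawlerFilter.contains(card.crawlerId);
            if (inScope && !folder.acl.equals(card.acl)) {
                card.acl = folder.acl; // inherit the parent folder's security
                changed++;
            }
        }
        for (Folder sub : folder.subFolders) {
            changed += enforceFolderSecurity(sub, crawlerFilter);
        }
        return changed;
    }

    public static void main(String[] args) {
        Folder root = new Folder("admins-only");
        root.cards.add(new Card("everyone", 7)); // out-of-sync card brought in by crawler 7
        System.out.println(enforceFolderSecurity(root, new ArrayList<>())); // prints: 1
    }
}
```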

That's it for now. I hope you'll test them out and let me know if you think these are useful or not.
Don’t hesitate to give ideas and/or wish lists of things that would be good to have in the portal...and don't hesitate to share the cool utilities you've written, too.

I or others will be updating the project every so often, so stay informed or you might miss out on some good resources :)

Sunday, January 18, 2009

WebCenter Native API Development: Advanced Search Query explained (and applied to WebSite Search)

In his article "Dot Com Portals: Smart Searching", Jordan Rose already explained really well how to implement an efficient and accurate website-like search with WebCenter Interaction (enter a search term and expect, in the results, either website pages or documents within those pages).

To summarize the challenge:

The very powerful WebCenter search component will index everything the user has access to (i.e. web content items, documents, crawled third-party websites, etc…) independently of their actual presence on the website pages. For example, the search results would present web content items instead of the website pages where those items are displayed through a portlet.

To summarize the solution:

Using the WebCenter Interaction “web crawler” capability (a Google-like spider that follows links on a page and indexes its content for future searches) coupled with experience definition features (to hide the parts of the page that we don’t want the crawler to index, like the top/left navigation, the banner, etc…), it is easy to implement a website-like search with accurate portal page results (refer to Jordan's blog post: Dot Com Portals: Smart Searching).

But one thing that was still missing from the solution was: "How can the crawler navigate from page to page" if the navigation is hidden? What we did at first (we did not have time to do better) was manually create an HTML file containing links to all the pages of the website, and direct the web crawler to that file instead of to the root of the public portal website URL.

This worked OK, but required a manual update of this file each time you create a new page...not super practical. Anyway, I finally took the time to improve it, and created this "Page Listing" code that renders a list of the pages located within a specific folder...Basically, you simply issue a request with "topfolderid", "includesubfolders", and "openerhost" parameters (http:////PageLinkListing?topfolderid=123&includesubfolders=true&openerhost=yourdomainhost), and the dotnet page will render all the portal page links that correspond to these values.
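Reading those three parameters is the trivial part of the endpoint; here is a Java sketch of it for illustration. The parameter names come from the request format above, but the hand-rolled query-string parsing is just a stand-in for what the ASP.NET Request object does for you in the real dotnet page:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: parse the Page Listing endpoint's three query-string parameters.
public class PageLinkListingParams {
    final int topFolderId;
    final boolean includeSubFolders;
    final String openerHost;

    PageLinkListingParams(String queryString) {
        Map<String, String> params = new HashMap<>();
        for (String pair : queryString.split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        topFolderId = Integer.parseInt(params.get("topfolderid"));
        includeSubFolders = Boolean.parseBoolean(params.get("includesubfolders"));
        openerHost = params.get("openerhost");
    }

    public static void main(String[] args) {
        PageLinkListingParams p =
            new PageLinkListingParams("topfolderid=123&includesubfolders=true&openerhost=yourdomainhost");
        System.out.println(p.topFolderId + " " + p.includeSubFolders + " " + p.openerHost);
        // prints: 123 true yourdomainhost
    }
}
```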

In this article I'd like to use this example to focus on the native search API (refer to my previous article about the native API), because the dotnet frontend itself is pretty simple. The solution involves:

  • DotNet front end page
  • Native Portal API
  • Webcenter search API to query the pages

First, as always, it all starts with the native session creation…then you can start creating the search request object:

IPTSearchRequest req = m_ptSession.GetSearchRequest();

From there, the IPTSearchRequest object allows you to set all sorts of settings that define the search you want to make. Simply call the SetSettings method. This method takes a setting ID and a value (which can be a string, an int, or an array of objects). The main problem is that this API is undocumented (the native API ships with no documentation)…but luckily, the setting IDs are all available through the PT_SEARCH_SETTING class, and each name is relatively self-explanatory (not always, though). Check out the example below, which sets the fields to return, specifies that best bets and spell check should not be executed, and sets the maximum number of results to bring back:




int[] arPropIDs = { PT_INTRINSICS.PT_PROPERTY_OBJECTID, PT_INTRINSICS.PT_PROPERTY_OBJECTNAME, PT_INTRINSICS.PT_PROPERTY_OBJECTSUMMARY };
req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_RET_PROPS, arPropIDs);
req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_INCLUDE_USUAL_FIELDS, false);
req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_KWIC, false);
req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_BESTBETS, false);
req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_SPELLCHECK, false);
req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_SKIPRESULTS, 0);
req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_MAXRESULTS, 10000);



You can also specify the admin folders (or KD folders) within which the search should be performed:




req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_ADMINFOLDERS, new int[] { adminfolderid });



and the object types the search should deal with (here we only want to search community pages, but much like the object type checkboxes in the advanced search interface, you could pick several object types to search for):




req.SetSettings(PT_SEARCH_SETTING.PT_SEARCHSETTING_OBJTYPES, new int[] { PT_CLASSIDS.PT_PAGE_ID });



Finally, you can create all sorts of filter statements to add to this search request. It works very similarly to the snapshot query interface: a filter can contain several “filter clauses”, and each clause can contain several “filter statements”. Clauses and statements can be combined using “OR” or “AND” operations.


Here, for this exercise, we will look for objects with an ID greater than 230 and a name containing “Test”… (a kind of useless query…but that’s not the point here…)




// Create a filter for the search request which will "AND" together each filter clause.
IPTFilter ptFilter = PortalObjectsFactory.CreateSearchFilter();
ptFilter.SetOperator(PT_BOOLOPS.PT_BOOLOP_AND);

// Create the clause that will contain the statements we need for the query
IPTPropertyFilterClauses ptFilterClause = (IPTPropertyFilterClauses) ptFilter.GetNewFilterItem(PT_FILTER_ITEM_TYPES.PT_FILTER_ITEM_CLAUSES);
// The filter clause should "AND" each of the statements.
ptFilterClause.SetOperator(PT_BOOLOPS.PT_BOOLOP_AND);

// Statement 1: ObjectID > 230
IPTPropertyFilterStatement statement1 = (IPTPropertyFilterStatement) ptFilter.GetNewFilterItem(PT_FILTER_ITEM_TYPES.PT_FILTER_ITEM_STATEMENT);
statement1.SetOperand(PT_INTRINSICS.PT_PROPERTY_OBJECTID);
statement1.SetOperator(PT_FILTEROPS.PT_FILTEROP_GT);
statement1.SetValue(230);

// Statement 2: Object Name contains the text "Test"
IPTPropertyFilterStatement statement2 = (IPTPropertyFilterStatement) ptFilter.GetNewFilterItem(PT_FILTER_ITEM_TYPES.PT_FILTER_ITEM_STATEMENT);
// Search on the name property.
statement2.SetOperand(PT_INTRINSICS.PT_PROPERTY_OBJECTNAME);
statement2.SetOperator(PT_FILTEROPS.PT_FILTEROP_CONTAINS);
statement2.SetValue("Test");

// Add the statements to the clause
ptFilterClause.AddItem(statement1, ptFilterClause.GetCount());
ptFilterClause.AddItem(statement2, ptFilterClause.GetCount());

// Add the clause to the filter
ptFilter.SetPropertyFilter(ptFilterClause);



As you can see, it is very powerful and straightforward, and allows you to perform all sorts of searches to fit your needs.


Finally, when you are done with the search parameters and filters, you simply need to execute the query, and get the results back…




IPTSearchQuery query = req.CreateAdvancedQuery(ptFilter);
IPTSearchResponse ptPagesResponse = req.Search(query);
int nResultCount = ptPagesResponse.GetResultsReturned();
for (int nIndex = 0; nIndex < nResultCount; nIndex++)
{
    // Do something with the data...
    int nObjectID = ptPagesResponse.GetFieldsAsInt(nIndex, PT_INTRINSICS.PT_PROPERTY_OBJECTID);
    string strName = ptPagesResponse.GetFieldsAsString(nIndex, PT_INTRINSICS.PT_PROPERTY_OBJECTNAME);
    string strSummary = ptPagesResponse.GetFieldsAsString(nIndex, PT_INTRINSICS.PT_PROPERTY_OBJECTSUMMARY);
}



Here it is. I hope you see the endless possibilities you now have using the native search API in your various native API utilities (portlet, console application, etc…). I will soon post the entirety of this code, plus many other extras, on the ALUI Toolbox Google project. Stay tuned, and happy new year! :)