We are in the process of setting up Blackwell's Collection Manager system to help streamline our purchasing and ordering of books. As part of the process (which, I should add, is still underway) we have developed a couple of simple Perl scripts that might be useful to others using Collection Manager. They have not been tested in a production environment, so use them at your own risk :)
1. MOUR (Mini Open Url Redirector)
Collection Manager has a user-based preference for setting up an OpenURL resolver: the idea is that you point this at your OpenURL server and it will tell you if a book is already in your catalogue. Unfortunately our OpenURL resolver doesn't link into our library catalogue properly for books, and users would have to click through two links even to see the results. The Mini Open Url Redirector simply redirects the browser to the library catalogue search for the book concerned, with no need to go through any intermediate screens.
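To give a feel for how little is involved, here is a rough sketch of such a redirector as a Perl CGI script. This is not our actual script; the catalogue URL and the ISBN parameter names are illustrative assumptions you would adjust for your own systems.

#!/usr/bin/perl
# Minimal OpenURL-to-catalogue redirector (sketch only).
# Assumes the ISBN arrives in an "isbn" or "rft.isbn" OpenURL
# parameter, and that the catalogue accepts an ISBN search URL of
# the hypothetical form below.
use strict;
use warnings;
use CGI;

my $q    = CGI->new;
my $isbn = $q->param('isbn') || $q->param('rft.isbn') || '';
$isbn =~ s/[^0-9Xx]//g;    # keep only ISBN characters

# Hypothetical catalogue search URL -- replace with your own.
my $catalogue = 'http://library.example.ac.nz/cgi-bin/Pwebrecon.cgi';

print $q->redirect("$catalogue?Search_Arg=$isbn&Search_Code=ISBN");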
2. Currency Split
Collection Manager exports orders to an FTP server, from which they can be downloaded and imported into Voyager. Unfortunately, Voyager can only handle order files in which every order uses the same currency. This script takes the file produced by Collection Manager and splits it into several files, one per currency, which can then be imported into Voyager separately.
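As a sketch of the splitting logic (not the actual script -- the real Collection Manager export format will differ, so the currency-matching regex here is only a placeholder), the core could look like this in Perl:

#!/usr/bin/perl
# Currency split sketch: write each order line to a per-currency file.
use strict;
use warnings;

my $infile = shift or die "usage: $0 orders.txt\n";
open my $in, '<', $infile or die "cannot open $infile: $!\n";

my %out;    # one output filehandle per currency code
while ( my $line = <$in> ) {
    # Placeholder: assume a three-letter currency code (USD, GBP,
    # NZD, ...) can be pulled out of each record.
    my ($currency) = $line =~ /\b([A-Z]{3})\b/ or next;
    unless ( $out{$currency} ) {
        open $out{$currency}, '>', "$infile.$currency"
            or die "cannot write $infile.$currency: $!\n";
    }
    print { $out{$currency} } $line;
}
close $_ for values %out;

Each output file ($infile.USD, $infile.GBP, and so on) then contains only orders in a single currency.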
Thursday, March 22, 2007
Surveying Users
One of our team's objectives this year will require us to create at least one web survey form. There have been other instances where we have been asked to assist in creating web surveys or tests, so I have been investigating possible options for expediting this. There are some good solutions for this purpose out there! One that struck me as extremely well thought through is PHPSurveyor. This is an open source product and one of the best examples of this style of software development I have seen. The current stable version is PHPSurveyor 1.0. The tool appears to be robust and extremely functional, with a “minimalist” interface that, whilst not exactly intuitive, only takes about an hour to get the hang of (it would be faster if someone was available to instruct you…).
The features of PHPSurveyor that really attract me are:
Whilst setup is not completely automated, very clear instructions are provided. In my Windows XP "test" environment, setup was straightforward, and we did not find it much different in the Linux server environment we set up for production.
The survey user interface is free of “branding”, which is great!
Questions can be set to appear singly or in groups (groups are really pages in this context).
There is a cool piece of functionality that allows questions to be dependent on an answer provided to an earlier question. This means the user is presented only with relevant questions, rather than with questions they would otherwise have to ignore…
There is a ribbon gauge that advises the user of their progress through the questionnaire.
The options for reporting and exporting results are straightforward.
This product is worth the trouble of implementing, although users will need some support to get it working initially.
Friday, March 16, 2007
PERL Data File Search Script
In the Powershell - Get Inventory Script post last month I briefly mentioned a web-based server script that we use to search the data output files we copy to our intranet server. Because I have noticed a significant boost in traffic to the website hosting the Get Inventory Script, I have cleaned up and rewritten the PERL script I developed for searching these files and am making it available here. I used PERL for this task as the process runs, for us, on a Linux intranet server, which of course has PERL installed natively.
This PERL script is designed to locate all the HTML files in a single directory. These files are named for the hostnames of the workstations that originally generated them via the Powershell - Get Inventory Script, so the script either searches for computer names and displays the complete data set for each matching computer on one page, or searches each file's contents for the requested data string and displays the results. Which type of search is done depends on the user's choice in the original search (Hostname or Keyword). If a Keyword search is selected, any HTML tags in the source code are removed before the remaining text is compared with the string being sought. This reduces the number of false hits!
By searching all the data files harvested (copied) from the workstations, there is no need to set up a database to store the data, with the interaction overhead that causes. Because we are searching the content of all the files, we are able to isolate discrete data quickly. For example, if I am asked how many installations of EndNote we have, I can run a Keyword search for EndNote; the returned data will tell me the number of workstations with EndNote (124 as of writing) and then list the names of those workstations, each followed by the version details of EndNote on that workstation.
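To illustrate the Keyword half of the process, here is a stripped-down sketch (not the full script): it slurps each HTML file in a directory, removes the markup with a deliberately crude regex, and reports which workstation files contain the search string. The data directory path is an assumption.

#!/usr/bin/perl
# Keyword-search sketch over per-workstation HTML data files.
use strict;
use warnings;

my $keyword = shift or die "usage: $0 keyword\n";
my $datadir = '/var/www/inventory';    # hypothetical data directory

opendir my $dh, $datadir or die "cannot open $datadir: $!\n";
my @hits;
for my $file ( grep { /\.html?$/ } readdir $dh ) {
    open my $fh, '<', "$datadir/$file" or next;
    local $/;                          # slurp the whole file
    my $text = <$fh>;
    $text =~ s/<[^>]*>//g;             # crude HTML tag removal
    push @hits, $file if $text =~ /\Q$keyword\E/i;
}
closedir $dh;

printf "%d workstation(s) matched '%s':\n", scalar @hits, $keyword;
print "  $_\n" for sort @hits;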
Links:
Download and implement the PERL Data File Search Script.
View the Powershell - Get Inventory Script post.
Sunday, March 4, 2007
A Spellchecker for Webvoyage
At the ANZREG Conference a couple of weeks ago, I presented the work I did enabling spellchecking functionality for Webvoyage. I was very pleased with the reaction, with a number of people showing interest in using the same or a similar system in their catalogues.
As promised, I have put all the code and some limited documentation on the web. Naturally I'm not going for a Pulitzer Prize in writing, or aiming to make the documentation absolutely complete, but if you have input you want to make into the webpages, the documentation, or the code, or just want to discuss the ideas involved, please feel free to contact me. My email address is j.brunskill AT waikato.ac.nz
Links:
Combining Datafiles
Dealing with text-based data extraction can be time-consuming and a real hassle, especially if you have to combine data files without causing a file to "blow out" with duplicated data. To help automate a couple of processes, I wrote this console application in VB .NET. The file is small (15KB) and does not require installing... but you will need to have the Microsoft .NET Framework Version 1.1 loaded on any workstation on which you want to run this application.
The zip file download of NGCombine.exe is mounted on my personal website. Open the downloaded zip file and copy NGCombine.exe into the system directory of your workstation or to a directory of your choosing. If you put the file in the system directory you will not need to use a full filepath to call it.
NGCombine.exe has been limited to processing 1,000,000 records (that is, lines of text) per file, which I think is plenty for most of us! Details of the call syntax follow:
NGCombine.exe
Combines the contents of two text files, sorting the data and eliminating empty lines. (If applied to a single file, the file is sorted.)
Syntax NGCombine [/a [X:\...]] [/n [X:\...]] [/o [X:\...]] [/e] [/r]
Parameters
/a [X:\...]
Required: Specifies filepath for the file containing original or "Archive" data.
/n [X:\...]
Specifies filepath for the file containing "New" or incoming data. If this file is not specified, a data sort will occur on the original or "Archive" data only.
/o [X:\...]
Specifies filepath for the output data. If this file is not specified, the default output filepath is the original or "Archive" data filepath.
/e
Eliminates any duplicate lines of data.
/r
Reverses the sort order.
/?
Displays help at the command prompt.
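For example, a call combining an archive file with a day's new data and eliminating duplicates might look like this (the filepaths are illustrative):

NGCombine /a C:\Data\archive.txt /n C:\Data\new.txt /o C:\Data\combined.txt /e

This reads both files, sorts the combined contents, removes empty and duplicate lines, and writes the result to combined.txt, leaving archive.txt untouched.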