[Mulgara-svn] r1311 - in trunk: data/fullTextTestData src/jar/resolver-lucene src/jar/resolver-lucene/java/org/mulgara/resolver/lucene

ronald at mulgara.org
Tue Oct 14 13:53:11 UTC 2008


Author: ronald
Date: 2008-10-14 06:53:10 -0700 (Tue, 14 Oct 2008)
New Revision: 1311

Added:
   trunk/data/fullTextTestData/data.n3
   trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverUnitTest.java
Modified:
   trunk/src/jar/resolver-lucene/build.xml
   trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/FullTextStringIndex.java
   trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/FullTextStringIndexUnitTest.java
   trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolver.java
   trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverFactory.java
   trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/ReadOnlyLuceneResolver.java
Log:
Make lucene-resolver transaction aware.

With this commit, changes are not only committed or rolled back as appropriate,
but changes made in one transaction are also not visible to other transactions
until they are committed, and then only to transactions started afterwards. This
essentially provides snapshot isolation, but with one major caveat: changes are
not visible to the transaction that made them either! For example, a query
following an insert in the same transaction will not return any results that
include the just-inserted data (in the lucene-resolver).
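The caveat can be illustrated with a minimal stand-in for the index (the classes
below are purely illustrative, not the actual Lucene or Mulgara API): a reader
opened at transaction start sees only the committed snapshot, so the
transaction's own uncommitted writes are invisible to its queries until a commit
and a reader reopen.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal stand-in illustrating the visibility semantics described above
 * (hypothetical classes; not the real Lucene/Mulgara API).
 */
class SnapshotIndex {
  private final List<String> committed = new ArrayList<String>();
  private final List<String> pending = new ArrayList<String>();

  /** Uncommitted writes accumulate in the writer, as with IndexWriter.addDocument. */
  void add(String doc) { pending.add(doc); }

  /** Like IndexWriter.commit: make pending changes durable and visible. */
  void commit() { committed.addAll(pending); pending.clear(); }

  /** Like opening an IndexReader: a snapshot of the committed state. */
  List<String> openReader() { return new ArrayList<String>(committed); }
}

public class VisibilityDemo {
  public static void main(String[] args) {
    SnapshotIndex index = new SnapshotIndex();

    List<String> reader = index.openReader(); // opened at transaction start
    index.add("new doc");

    // The caveat: a query in the same transaction does not see the insert.
    if (reader.contains("new doc")) throw new AssertionError("uncommitted data visible");

    index.commit();

    // The old snapshot is still unchanged even after commit ...
    if (reader.contains("new doc")) throw new AssertionError("old snapshot changed");
    // ... but a reader opened afterwards (a new transaction) sees the data.
    if (!index.openReader().contains("new doc")) throw new AssertionError("committed data missing");

    System.out.println("visibility semantics verified");
  }
}
```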

One possible way around this caveat may be to create a new index for every
transaction and merge it back into the main index on commit; it is unclear right
now how costly that would be in terms of performance, though.
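A sketch of that alternative (not what this commit implements, and again with
illustrative stand-in classes rather than real Lucene calls): each transaction
writes to a private index, queries the union of the main index and its private
one so that its own writes are visible, and merges the private index into the
main one on commit.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the proposed per-transaction-index alternative (hypothetical;
 * not implemented by this commit). A real version might merge via
 * IndexWriter.addIndexes, whose cost is the open question noted above.
 */
public class MergeOnCommit {
  final List<String> mainIndex = new ArrayList<String>();

  class Tx {
    private final List<String> txIndex = new ArrayList<String>();

    void add(String doc) { txIndex.add(doc); }

    /** Union view: the main index plus this transaction's own writes. */
    boolean find(String doc) { return mainIndex.contains(doc) || txIndex.contains(doc); }

    /** On commit, merge the private index back into the main index. */
    void commit() { mainIndex.addAll(txIndex); txIndex.clear(); }
  }

  public static void main(String[] args) {
    MergeOnCommit store = new MergeOnCommit();
    MergeOnCommit.Tx tx = store.new Tx();

    tx.add("doc");
    if (!tx.find("doc")) throw new AssertionError("own write not visible");  // no caveat here
    if (store.mainIndex.contains("doc")) throw new AssertionError("leaked before commit");

    tx.commit();
    if (!store.mainIndex.contains("doc")) throw new AssertionError("merge failed");

    System.out.println("merge-on-commit sketch ok");
  }
}
```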

Because IndexReaders and IndexWriters are now kept open for the duration of the
transaction, applications that perform multiple queries and/or writes in a
transaction should see a noticeable performance improvement.
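The source of that improvement is the caching pattern visible in the diff's
getFullTextStringIndex: one index handle per model is opened on first use and
then reused for the rest of the transaction. A self-contained sketch (names are
illustrative; the real code caches FullTextStringIndex instances keyed by model
node):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of per-transaction index caching: the expensive open happens once,
 * and every subsequent operation in the transaction reuses the open handle.
 */
public class IndexCache {
  private final Map<Long, String> indexes = new HashMap<Long, String>();
  int opens = 0; // counts how often a handle is actually opened

  String getIndex(long model) {
    String index = indexes.get(model);
    if (index == null) {          // open once, on first use ...
      opens++;
      index = "index-" + model;   // stand-in for opening a FullTextStringIndex
      indexes.put(model, index);
    }
    return index;                 // ... then reuse for the whole transaction
  }

  public static void main(String[] args) {
    IndexCache cache = new IndexCache();
    for (int i = 0; i < 100; i++) {
      cache.getIndex(42L);        // 100 operations against the same model
    }
    if (cache.opens != 1) throw new AssertionError("expected a single open");
    System.out.println("opened " + cache.opens + " time(s) for 100 operations");
  }
}
```

Previously the index was opened and closed around each operation, so the
per-operation cost included the open/close overhead every time.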

Added: trunk/data/fullTextTestData/data.n3
===================================================================
--- trunk/data/fullTextTestData/data.n3	                        (rev 0)
+++ trunk/data/fullTextTestData/data.n3	2008-10-14 13:53:10 UTC (rev 1311)
@@ -0,0 +1,22 @@
+<foo:node1> <foo:hasText> "AACP Pneumothorax Consensus Group" .
+<foo:node2> <foo:hasText> "ALS-HPS Steering Group" .
+<foo:node3> <foo:hasText> "ALSPAC (Avon Longitudinal Study of Parents and Children) Study Team" .
+<foo:node4> <foo:hasText> "ALTS Study group" .
+<foo:node5> <foo:hasText> "American Academy of Asthma, Allergy and Immunology" .
+<foo:node6> <foo:hasText> "American Association for the Surgery of Trauma" .
+<foo:node7> <foo:hasText> "American College of Chest Physicians" .
+<foo:node8> <foo:hasText> "Antiarrhythmics Versus Implantable Defibrillator (AVID) Trial Investigators" .
+<foo:node9> <foo:hasText> "Antibiotic Use Working Group" .
+<foo:node10> <foo:hasText> "Atypical Squamous Cells Intraepithelial" .
+<foo:node11> <foo:hasText> "Lesion Triage Study (ALTS) Group" .
+<foo:node12> <foo:hasText> "Australasian Society for Thrombosis and Haemostasis (ASTH) Emerging Technologies Group" .
+<foo:node13> <foo:hasText> "Benefit Evaluation of Direct Coronary Stenting Study Group" .
+<foo:node14> <foo:hasText> "Biomarkers Definitions Working Group." .
+<foo:node15> <foo:hasText> "Canadian Colorectal Surgery DVT Prophylaxis Trial investigators" .
+<foo:node16> <foo:hasText> "Cancer Research Campaign Phase I - II Committee" .
+<foo:node17> <foo:hasText> "Central Technical Coordinating Unit" .
+<foo:node18> <foo:hasText> "Clinical Epidemiology Group from the French Hospital Database on HIV" .
+<foo:node19> <foo:hasText> "CNAAB3005 International Study Team" .
+<foo:node20> <foo:hasText> "Commissione ad hoc" .
+<foo:node21> <foo:hasText> "Committee to Advise on Tropical Medicine and Travel" .
+<foo:node22> <foo:hasText> "Comparison of Candesartan and Amlodipine for Safety, Tolerability and Efficacy (CASTLE) Study Investigators" .


Property changes on: trunk/data/fullTextTestData/data.n3
___________________________________________________________________
Name: svn:keywords
   + Id HeadURL Revision
Name: svn:eol-style
   + native

Modified: trunk/src/jar/resolver-lucene/build.xml
===================================================================
--- trunk/src/jar/resolver-lucene/build.xml	2008-10-14 13:53:01 UTC (rev 1310)
+++ trunk/src/jar/resolver-lucene/build.xml	2008-10-14 13:53:10 UTC (rev 1311)
@@ -23,8 +23,10 @@
 
     <fileset file="${query.dist.dir}/${query.jar}"/>
     <fileset file="${resolver-spi.dist.dir}/${resolver-spi.jar}"/>
+    <fileset file="${resolver.dist.dir}/${resolver.jar}"/>
     <fileset file="${tuples.dist.dir}/${tuples.jar}"/>
     <fileset file="${util.dist.dir}/${util.jar}"/>
+    <fileset file="${querylang.dist.dir}/${querylang.jar}"/>
 
     <fileset file="${lib.dir}/${lucene.core.jar}"/>
   </path>
@@ -34,6 +36,16 @@
     <path refid="resolver-lucene-classpath"/>
 
     <fileset file="${resolver-lucene.dist.dir}/${resolver-lucene.jar}"/>
+    <fileset file="${store-stringpool-xa11.dist.dir}/${store-stringpool-xa11.jar}"/>
+    <fileset file="${util-xa.dist.dir}/${util-xa.jar}"/>
+    <fileset file="${resolver-store.dist.dir}/${resolver-store.jar}"/>
+    <fileset file="${store-nodepool-memory.dist.dir}/${store-nodepool-memory.jar}"/>
+    <fileset file="${store-stringpool-memory.dist.dir}/${store-stringpool-memory.jar}"/>
+    <fileset file="${tuples-hybrid.dist.dir}/${tuples-hybrid.jar}"/>
+    <fileset file="${resolver-memory.dist.dir}/${resolver-memory.jar}"/>
+    <fileset file="${content-n3.dist.dir}/${content-n3.jar}"/>
+    <fileset file="${resolver-file.dist.dir}/${resolver-file.jar}"/>
+    <fileset file="${lib.dir}/${antlr.jar}"/>
   </path>
 
   <target name="resolver-lucene-clean"

Modified: trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/FullTextStringIndex.java
===================================================================
--- trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/FullTextStringIndex.java	2008-10-14 13:53:01 UTC (rev 1310)
+++ trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/FullTextStringIndex.java	2008-10-14 13:53:10 UTC (rev 1311)
@@ -127,12 +127,6 @@
   /** Analyzer used for writing and reading */
   private Analyzer analyzer = getAnalyzer();
 
-  /**
-   * Locking object to stop multiple threads performing inserts, deletes and
-   * optimizations are the same instance.
-   */
-  private IndexLock indexLock = new IndexLock();
-
   private String name;
 
   /**
@@ -144,12 +138,12 @@
    * @param newName 
    * @throws FullTextStringIndexException Failure to initialize write index
    */
-  public FullTextStringIndex(String directory, String newName)
+  public FullTextStringIndex(String directory, String newName, boolean forWrites)
       throws FullTextStringIndexException {
     name = newName;
     enableReverseTextIndex = System.getProperty(
         "mulgara.textindex.reverse.enabled", "false").equalsIgnoreCase("true");
-    init(directory);
+    init(directory, forWrites);
   }
 
   /**
@@ -162,37 +156,33 @@
    * @throws FullTextStringIndexException Failure to initialize write index
    */
   public FullTextStringIndex(String directory, String newName,
-      boolean newEnableReverseTextIndex) throws FullTextStringIndexException {
+      boolean newEnableReverseTextIndex, boolean forWrites) throws FullTextStringIndexException {
     name = newName;
     enableReverseTextIndex = newEnableReverseTextIndex;
-    init(directory);
+    init(directory, forWrites);
   }
 
   /**
    * Initialize the store.
    *
    * @param directory the directory to put the index files.
+   * @param forWrites Whether to open an index writer
    * @throws FullTextStringIndexException Failure to initialize write index
    */
-  private void init(String directory) throws FullTextStringIndexException {
-    synchronized (indexLock) {
-      try {
-        Lock lock = FSDirectory.getDirectory(TempDir.getTempDir().getPath()).makeLock(name + ".lock");
+  private void init(String directory, boolean forWrites) throws FullTextStringIndexException {
+    try {
+      Lock lock = FSDirectory.getDirectory(TempDir.getTempDir().getPath()).makeLock(name + ".lock");
 
-        synchronized (lock) {
-          lock.obtain();
+      synchronized (lock) {
+        lock.obtain();
 
-          // create/open the indexes for reading and writing.
-          initialize(directory);
+        // create/open the indexes for reading and writing.
+        initialize(directory, forWrites);
 
-          //set the status to not modified
-          indexLock.setStatus(indexLock.NOT_MODIFIED);
-
-          lock.release();
-        }
-      } catch (IOException ioe) {
-        throw new FullTextStringIndexException(ioe);
+        lock.release();
       }
+    } catch (IOException ioe) {
+      throw new FullTextStringIndexException(ioe);
     }
   }
 
@@ -334,15 +324,7 @@
       indexDocument.add(new Field(SUBJECT_KEY, subject, Field.Store.YES, Field.Index.NOT_ANALYZED));
 
       try {
-        //lock any deletes, adds and optimize from the index
-        synchronized (indexLock) {
-          // add to writer
-          indexer.addDocument(indexDocument);
-
-          // Update the status of the index
-          indexLock.setStatus(indexLock.MODIFIED);
-        }
-
+        indexer.addDocument(indexDocument);
         added = true;
       } catch (IOException ex) {
         logger.error("Unable to add fulltext string subject <" + subject + "> predicate <" +
@@ -418,15 +400,7 @@
     indexDocument.add(new Field(SUBJECT_KEY, subject, Field.Store.YES, Field.Index.NOT_ANALYZED));
 
     try {
-      //lock any deletes, adds and optimize from the index
-      synchronized (indexLock) {
-        // add to writer
-        indexer.addDocument(indexDocument);
-
-        // Update the status of the index
-        indexLock.setStatus(indexLock.MODIFIED);
-      }
-
+      indexer.addDocument(indexDocument);
       added = true;
     } catch (IOException ex) {
       logger.error("Unable to add fulltext string subject <" + subject + "> predicate <" +
@@ -464,16 +438,8 @@
     }
 
     try {
-      //lock any deletes, adds and optimize from the index
-      synchronized (indexLock) {
-        // add to writer
-        indexer.addDocument(indexDocument);
-
-        // Update the status of the index
-        indexLock.setStatus(indexLock.MODIFIED);
-
-        added = true;
-      }
+      indexer.addDocument(indexDocument);
+      added = true;
     } catch (IOException ex) {
       logger.error("Unable to add " + indexDocument + " to fulltext string index", ex);
       throw new FullTextStringIndexException("Unable to add " + indexDocument + " to fulltext string index", ex);
@@ -501,33 +467,27 @@
 
     //Delete the directory if it exists
     if (luceneIndexDirectory != null) {
-      //lock any deletes, adds and optimize from the index
-      synchronized (indexLock) {
-        //Close the reading and writing indexes
-        close();
+      //Close the reading and writing indexes
+      close();
 
-        try {
-          Lock lock = FSDirectory.getDirectory(TempDir.getTempDir().getPath()).makeLock(name + ".lock");
+      try {
+        Lock lock = FSDirectory.getDirectory(TempDir.getTempDir().getPath()).makeLock(name + ".lock");
 
-          synchronized (lock) {
-            lock.obtain();
+        synchronized (lock) {
+          lock.obtain();
 
-            //Remove all files from the directory
-            for (String file : luceneIndexDirectory.list()) {
-              luceneIndexDirectory.deleteFile(file);
-            }
+          //Remove all files from the directory
+          for (String file : luceneIndexDirectory.list()) {
+            luceneIndexDirectory.deleteFile(file);
+          }
 
-            //Remove the directory
-            deleted = new File(indexDirectoryName).delete();
+          //Remove the directory
+          deleted = new File(indexDirectoryName).delete();
 
-            //set the status to not modified
-            indexLock.setStatus(indexLock.NOT_MODIFIED);
-
-            lock.release();
-          }
-        } catch (IOException ioe) {
-          throw new FullTextStringIndexException(ioe);
+          lock.release();
         }
+      } catch (IOException ioe) {
+        throw new FullTextStringIndexException(ioe);
       }
     }
 
@@ -561,18 +521,9 @@
 
     try {
       Term term = new Term(ID_KEY, key);
+      indexer.deleteDocuments(term);
+      removed = true; // TODO: could use docCount(), but that seems overly expensive
 
-      //lock any deletes, adds and optimize from the index
-      synchronized (indexLock) {
-        //check the status of the index
-        indexer.deleteDocuments(term);
-
-        //set the index status to modified
-        indexLock.setStatus(indexLock.MODIFIED);
-
-        removed = true; // TODO: could use docCount(), but that seems overly expensive
-      }
-
       if (logger.isDebugEnabled()) {
         if (removed) {
           logger.debug("Removed key '" + key + "' from fulltext string pool");
@@ -603,13 +554,7 @@
     try {
       QueryParser parser = new QueryParser(ID_KEY, analyzer);
       parser.setAllowLeadingWildcard(true);
-      synchronized (indexLock) {
-        //check the status of the index
-        indexer.deleteDocuments(parser.parse("*"));
-
-        //set the index status to modified
-        indexLock.setStatus(indexLock.MODIFIED);
-      }
+      indexer.deleteDocuments(parser.parse("*"));
     } catch (ParseException ex) {
       logger.error("Unexpected internal error", ex);
       throw new FullTextStringIndexException("Unexpected internal error", ex);
@@ -626,6 +571,10 @@
    *      index.
    */
   public void close() throws FullTextStringIndexException {
+    if (logger.isDebugEnabled()) {
+      logger.debug("Closing fulltext indexes");
+    }
+
     try {
       if (indexer != null) {
         indexer.close();
@@ -648,22 +597,14 @@
    * @throws FullTextStringIndexException If there was a problem reading from or writing to the disk.
    */
   public void optimize() throws FullTextStringIndexException {
-    // debug logging
-    if (logger.isDebugEnabled()) {
-      logger.debug("Optimizing fulltext string pool indexes to disk");
+    if (indexer == null) return;
+
+    if (logger.isInfoEnabled()) {
+      logger.info("Optimizing fulltext index at " + indexDirectoryName + " please wait...");
     }
 
     try {
-      //lock any deletes, adds and optimize from the index
-      synchronized (indexLock) {
-        logger.info("Optimizing fulltext index at " + indexDirectoryName + " please wait...");
-
-        //Optimize the indexes
-        indexer.optimize();
-
-        //set the index status to modified
-        indexLock.setStatus(indexLock.MODIFIED);
-      }
+      indexer.optimize();
     } catch (IOException ex) {
       logger.error("Unable to optimize existing fulltext string pool index", ex);
       throw new FullTextStringIndexException("Unable to optimize existing fulltext string pool index", ex);
@@ -733,22 +674,6 @@
         }
       }
 
-      //wait for locks performing deletes, adds and optimizes
-      synchronized (indexLock) {
-        //check the status of the index
-        if (indexLock.getStatus() == indexLock.MODIFIED) {
-          // flush the changes
-          if (indexer != null)
-            indexer.commit();
-
-          // re-open the read index
-          openReadIndex();
-
-          //set the index status to not modified
-          indexLock.setStatus(indexLock.NOT_MODIFIED);
-        }
-      }
-
       //Perform query
       indexSearcher.search(bQuery, hits = new Hits(indexSearcher.getIndexReader()));
 
@@ -791,22 +716,6 @@
         logger.debug("Searching the fulltext string index pool with query " + query.toString(LITERAL_KEY));
       }
 
-      //wait for locks performing deletes, adds and optimizes
-      synchronized (indexLock) {
-        //check the status of the index
-        if (indexLock.getStatus() == indexLock.MODIFIED) {
-          // flush the changes
-          if (indexer != null)
-            indexer.commit();
-
-          // re-open the read index
-          openReadIndex();
-
-          //set the index status to not modified
-          indexLock.setStatus(indexLock.NOT_MODIFIED);
-        }
-      }
-
       //Perform query
       indexSearcher.search(query, hits = new Hits(indexSearcher.getIndexReader()));
     } catch (IOException ex) {
@@ -822,10 +731,11 @@
    * indexes if they do not exist.
    *
    * @param directory Directory of the index to be initialized
+   * @param forWrites Whether to open an index writer
    * @throws FullTextStringIndexException IOException occurs while trying to
    *      locate or create the indexes
    */
-  private void initialize(String directory) throws FullTextStringIndexException {
+  private void initialize(String directory, boolean forWrites) throws FullTextStringIndexException {
     // debug logging
     if (logger.isDebugEnabled()) {
       logger.debug("Initialization of FullTextStringIndex to directory to " + directory);
@@ -859,7 +769,7 @@
     }
 
     // ensure the directory is writeable
-    if (!indexDirectory.canWrite()) {
+    if (forWrites && !indexDirectory.canWrite()) {
       indexDirectory = null;
       logger.fatal("The fulltext string index directory '" + directory + "' is not writeable!");
       throw new FullTextStringIndexException("The fulltext string index directory '" + directory +
@@ -876,25 +786,27 @@
     assert luceneIndexDirectory != null;
 
     // Open the index for writing
-    try {
-      openWriteIndex();
-    } catch (LockObtainFailedException lofe) {
-      logger.warn("Failed to obtain fulltext index directory lock; forcibly unlocking and trying again", lofe);
-
-      /* If it fails once try and unlock the directory and try again. This shouldn't happen
-       * unless mulgara was shut down abruptly since mulgara has a single writer lock.
-       */
+    if (forWrites) {
       try {
-        IndexWriter.unlock(luceneIndexDirectory);
-      } catch (IOException ioe) {
-        throw new FullTextStringIndexException("Failed to unlock directory: " + luceneIndexDirectory, ioe);
-      }
-
-      // Try again - let it fail this time.
-      try {
         openWriteIndex();
-      } catch (LockObtainFailedException lofe2) {
-        throw new FullTextStringIndexException(lofe2);
+      } catch (LockObtainFailedException lofe) {
+        logger.warn("Failed to obtain fulltext index directory lock; forcibly unlocking and trying again", lofe);
+
+        /* If it fails once try and unlock the directory and try again. This shouldn't happen
+         * unless mulgara was shut down abruptly since mulgara has a single writer lock.
+         */
+        try {
+          IndexWriter.unlock(luceneIndexDirectory);
+        } catch (IOException ioe) {
+          throw new FullTextStringIndexException("Failed to unlock directory: " + luceneIndexDirectory, ioe);
+        }
+
+        // Try again - let it fail this time.
+        try {
+          openWriteIndex();
+        } catch (LockObtainFailedException lofe2) {
+          throw new FullTextStringIndexException(lofe2);
+        }
       }
     }
 
@@ -934,11 +846,11 @@
   }
 
   /**
-   * Open the index on disk for reading.
+   * (Re)open the index on disk for reading.
    *
    * @throws FullTextStringIndexException if there is an error whilst opening the index.
    */
-  private void openReadIndex() throws FullTextStringIndexException {
+  void openReadIndex() throws FullTextStringIndexException {
     // debug logging
     if (logger.isDebugEnabled()) {
       logger.debug("Opening index for IndexSearcher");
@@ -969,6 +881,30 @@
     }
   }
 
+  public void prepare() throws IOException {
+    if (logger.isDebugEnabled()) {
+      logger.debug("Preparing fulltext indexes");
+    }
+
+    if (indexer != null) indexer.prepareCommit();
+  }
+
+  public void rollback() throws IOException {
+    if (logger.isDebugEnabled()) {
+      logger.debug("Rolling back fulltext indexes");
+    }
+
+    if (indexer != null) indexer.rollback();
+  }
+
+  public void commit() throws IOException {
+    if (logger.isDebugEnabled()) {
+      logger.debug("Committing fulltext indexes");
+    }
+
+    if (indexer != null) indexer.commit();
+  }
+
   /**
    * Lucene Hits has been deprecated, so this is our simple version thereof. Since we always
    * read all results, this is more efficient too.
@@ -1011,22 +947,4 @@
       reader.decRef();
     }
   }
-
-  /**
-   * Locking object to stop multiple threads performing inserts, deletes and
-   * optimizations are the same instance.
-   */
-  private static class IndexLock {
-    final static int MODIFIED = 1;
-    final static int NOT_MODIFIED = 0;
-    private int status = NOT_MODIFIED;
-
-    public void setStatus(int status) {
-      this.status = status;
-    }
-
-    public int getStatus() {
-      return status;
-    }
-  }
 }

Modified: trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/FullTextStringIndexUnitTest.java
===================================================================
--- trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/FullTextStringIndexUnitTest.java	2008-10-14 13:53:01 UTC (rev 1310)
+++ trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/FullTextStringIndexUnitTest.java	2008-10-14 13:53:10 UTC (rev 1311)
@@ -30,6 +30,7 @@
 // Java 2 standard packages
 import java.io.File;
 import java.io.FileInputStream;
+import java.io.FilenameFilter;
 import java.io.InputStreamReader;
 import java.io.IOException;
 import java.io.Reader;
@@ -103,7 +104,6 @@
     TestSuite suite = new TestSuite();
     suite.addTest(new FullTextStringIndexUnitTest("testFullTextStringPool"));
     suite.addTest(new FullTextStringIndexUnitTest("testFullTextStringPoolwithFiles"));
-    suite.addTest(new FullTextStringIndexUnitTest("testWithoutClosing"));
 
     return suite;
   }
@@ -206,7 +206,7 @@
    * @throws Exception Test fails
    */
   public void testFullTextStringPool() throws Exception {
-    FullTextStringIndex index = new FullTextStringIndex(indexDirectory, "fulltextsp", true);
+    FullTextStringIndex index = new FullTextStringIndex(indexDirectory, "fulltextsp", true, true);
 
     try {
       // Ensure that reverse search is enabled.
@@ -217,13 +217,16 @@
       index.removeAllIndexes();
 
       //re-create the index
-      index = new FullTextStringIndex(indexDirectory, "fulltextsp", true);
+      index = new FullTextStringIndex(indexDirectory, "fulltextsp", true, true);
 
       // Add strings to the index
       for (String literal : theStrings) {
         index.add(document, has, literal);
       }
 
+      index.commit();
+      index.openReadIndex();
+
       // Find the strings from the index with both subject & predicate
       for (String literal : theStrings) {
         FullTextStringIndex.Hits hits = index.find(document, has, literal);
@@ -255,6 +258,9 @@
       index.remove(document, has, "one");
       index.remove(document, has, "one two three");
 
+      index.commit();
+      index.openReadIndex();
+
       assertEquals("Presumed deleted but found 'one two'", 0,
                    index.find(document, has, "one two").length());
       assertEquals("Presumed deleted but found 'one'", 0,
@@ -268,10 +274,12 @@
       assertFalse("Adding an empty literal string should fail",
                   index.add("subject","predicate", "  "));
 
-
       assertTrue("Adding a string containing slashes to the fulltext string pool",
                  index.add("subject", "predicate", "this/is/a/slash/test"));
 
+      index.commit();
+      index.openReadIndex();
+
       long returned = index.find(document, has, "?ommittee").length();
       assertEquals("Reverse lookup was expecting 4 documents returned", 4, returned);
 
@@ -292,6 +300,8 @@
 
       // test removing all documents
       index.removeAll();
+      index.commit();
+      index.openReadIndex();
 
       returned = index.find(document, has, "European").length();
       assertEquals("Got unexpected documents after removeAll:", 0, returned);
@@ -314,7 +324,7 @@
    */
   public void testFullTextStringPoolwithFiles() throws Exception {
     // create a new index direcotry
-    FullTextStringIndex index = new FullTextStringIndex(indexDirectory, "fulltextsp", true);
+    FullTextStringIndex index = new FullTextStringIndex(indexDirectory, "fulltextsp", true, true);
 
     try {
       // make sure the index directory is empty
@@ -322,12 +332,16 @@
       assertTrue("Unable to remove all index files", index.removeAllIndexes());
 
       // create a new index
-      index = new FullTextStringIndex(indexDirectory, "fulltextsp", true);
+      index = new FullTextStringIndex(indexDirectory, "fulltextsp", true, true);
 
       logger.debug("Obtaining text text documents from " + textDirectory);
 
       File directory = new File(textDirectory);
-      File[] textDocuments = directory.listFiles();
+      File[] textDocuments = directory.listFiles(new FilenameFilter() {
+        public boolean accept(File dir, String name) {
+          return name.endsWith(".txt");
+        }
+      });
 
       // keep a track of the number of documents added.
       int docsAdded = 0;
@@ -355,6 +369,10 @@
       // check if all text documents were indexed
       assertEquals("Expected 114 text documents to be indexed", 114, docsAdded);
 
+      // commit the new docs
+      index.commit();
+      index.openReadIndex();
+
       // Perform a search for 'supernatural' in the
       // document content predicate
       FullTextStringIndex.Hits hits =
@@ -381,6 +399,10 @@
       // check the document were removed
       assertEquals("Expected 6 documents to be removed'", 6, docsRemoved);
 
+      // commit the removal
+      index.commit();
+      index.openReadIndex();
+
       // Perform a search for 'supernatural' in the
       // document content predicate
       hits = index.find(null, "http://mulgara.org/mulgara/Document#Content", "supernatural");
@@ -397,23 +419,4 @@
       }
     }
   }
-
-  /**
-   * Test creating two text indexes.
-   *
-   * @throws Exception Test fails
-   */
-  public void testWithoutClosing() throws Exception {
-    FullTextStringIndex index, index2, index3, index4;
-
-    index = new FullTextStringIndex(indexDirectory, "fulltextsp", true);
-    index2 = new FullTextStringIndex(indexDirectory2, "fulltextsp2", true);
-    index3 = new FullTextStringIndex(indexDirectory, "fulltextsp", true);
-    index4 = new FullTextStringIndex(indexDirectory2, "fulltextsp2", true);
-
-    index = new FullTextStringIndex(indexDirectory, "fulltextsp", true);
-    index2 = new FullTextStringIndex(indexDirectory2, "fulltextsp2", true);
-    index3 = new FullTextStringIndex(indexDirectory, "fulltextsp", true);
-    index4 = new FullTextStringIndex(indexDirectory2, "fulltextsp2", true);
-  }
 }

Modified: trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolver.java
===================================================================
--- trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolver.java	2008-10-14 13:53:01 UTC (rev 1310)
+++ trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolver.java	2008-10-14 13:53:10 UTC (rev 1311)
@@ -37,6 +37,9 @@
 import java.net.MalformedURLException;
 import java.net.URI;
 import java.net.URLConnection;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
 import javax.activation.MimeType;
 import javax.activation.MimeTypeParseException;
 import javax.transaction.xa.XAResource;
@@ -57,15 +60,17 @@
 import org.mulgara.query.QueryException;
 import org.mulgara.query.TuplesException;
 import org.mulgara.query.Variable;
-import org.mulgara.resolver.spi.DummyXAResource;
+import org.mulgara.resolver.spi.AbstractXAResource;
+import org.mulgara.resolver.spi.AbstractXAResource.RMInfo;
+import org.mulgara.resolver.spi.AbstractXAResource.TxInfo;
 import org.mulgara.resolver.spi.EmptyResolution;
 import org.mulgara.resolver.spi.GlobalizeException;
 import org.mulgara.resolver.spi.LocalizeException;
 import org.mulgara.resolver.spi.Resolution;
 import org.mulgara.resolver.spi.Resolver;
+import org.mulgara.resolver.spi.ResolverFactory;
+import org.mulgara.resolver.spi.ResolverException;
 import org.mulgara.resolver.spi.ResolverSession;
-import org.mulgara.resolver.spi.ResolverFactoryException;
-import org.mulgara.resolver.spi.ResolverException;
 import org.mulgara.resolver.spi.Statements;
 import org.mulgara.resolver.spi.TuplesWrapperResolution;
 import org.mulgara.store.tuples.Tuples;
@@ -107,6 +112,12 @@
 
   protected final ResolverSession resolverSession;
 
+  protected final boolean forWrites;
+
+  protected final XAResource xares;
+
+  protected final Map<Long, FullTextStringIndex> indexes = new HashMap<Long, FullTextStringIndex>();
+
   //
   // Constructors
   //
@@ -117,9 +128,12 @@
    * @param modelTypeURI     the URI of the lucene model type
    * @param resolverSession  the session this resolver is associated with
    * @param directory        the directory to use for the lucene indexes
+   * @param resolverFactory  the resolver-factory that created us
+   * @param forWrites        whether we may be getting writes
    * @throws IllegalArgumentException  if <var>directory</var> is <code>null</code>
    */
-  LuceneResolver(URI modelTypeURI, ResolverSession resolverSession, String directory) {
+  LuceneResolver(URI modelTypeURI, ResolverSession resolverSession, String directory,
+                 ResolverFactory resolverFactory, boolean forWrites) {
     if (directory == null) {
       throw new IllegalArgumentException("Null directory in LuceneResolver");
     }
@@ -128,6 +142,8 @@
     this.modelTypeURI = modelTypeURI;
     this.directory = directory;
     this.resolverSession = resolverSession;
+    this.forWrites = forWrites;
+    this.xares = new LuceneXAResource(10, resolverFactory, indexes.values());
   }
 
   //
@@ -150,13 +166,8 @@
     }
   }
 
-  /**
-   * @return a {@link DummyXAResource} with a 10 second transaction timeout
-   */
   public XAResource getXAResource() {
-    return new DummyXAResource(
-        10 // seconds before transaction timeout
-        );
+    return xares;
   }
 
   /**
@@ -169,7 +180,7 @@
     }
 
     try {
-      FullTextStringIndex stringIndex = openFullTextStringIndex(model);
+      FullTextStringIndex stringIndex = getFullTextStringIndex(model);
 
       statements.beforeFirst();
       while (statements.next()) {
@@ -311,10 +322,6 @@
           }
         }
       }
-
-      // Flush the index
-      stringIndex.optimize();
-      stringIndex.close();
     } catch (TuplesException et) {
       throw new ResolverException("Error fetching statements", et);
     } catch (GlobalizeException eg) {
@@ -333,10 +340,7 @@
     }
 
     try {
-      FullTextStringIndex stringIndex = openFullTextStringIndex(model);
-      stringIndex.removeAll();
-      stringIndex.optimize();
-      stringIndex.close();
+      getFullTextStringIndex(model).removeAll();
     } catch (FullTextStringIndexException ef) {
       throw new ResolverException("Query failed against string index", ef);
     }
@@ -362,11 +366,10 @@
     }
 
     try {
-      FullTextStringIndex stringIndex = openFullTextStringIndex(((LocalNode)modelElement).getValue());
+      FullTextStringIndex stringIndex = getFullTextStringIndex(((LocalNode)modelElement).getValue());
       Tuples tmpTuples = new FullTextStringIndexTuples(stringIndex, constraint, resolverSession);
       Tuples tuples = TuplesOperations.sort(tmpTuples);
       tmpTuples.close();
-      stringIndex.close();
 
       return new TuplesWrapperResolution(tuples, constraint);
     } catch (TuplesException te) {
@@ -376,9 +379,111 @@
     }
   }
 
-  private FullTextStringIndex openFullTextStringIndex(long model) throws FullTextStringIndexException {
-    return new FullTextStringIndex(new File(directory, Long.toString(model)).toString(), "gn"+model );
+  private FullTextStringIndex getFullTextStringIndex(long model) throws FullTextStringIndexException {
+    FullTextStringIndex index = indexes.get(model);
+    if (index == null) {
+      index = new FullTextStringIndex(new File(directory, Long.toString(model)).toString(), "gn" + model, forWrites);
+      indexes.put(model, index);
+    }
+    return index;
   }
 
-  public void abort() {}
+  public void abort() {
+    try {
+      closeIndexes(indexes.values(), false);
+    } catch (Exception e) {
+      logger.error("Error closing fulltext index", e);
+    }
+  }
+
+  private static void closeIndexes(Collection<FullTextStringIndex> indexes, boolean commit)
+      throws Exception {
+    Exception exc = null;
+
+    for (FullTextStringIndex index : indexes) {
+      try {
+        if (commit) {
+          index.optimize();
+          index.commit();
+        } else {
+          index.rollback();
+        }
+      } catch (Exception e) {
+        if (exc == null)
+          exc = e;
+        else
+          logger.error("Error rolling back fulltext index", e);
+      } finally {
+        try {
+          index.close();
+        } catch (Exception e) {
+          if (exc == null)
+            exc = e;
+          else
+            logger.error("Error closing fulltext index", e);
+        }
+      }
+    }
+
+    if (exc != null)
+      throw exc;
+  }
+
+
+  /**
+   * An XAResource to manage the lucene indexes.
+   */
+  private static class LuceneXAResource
+      extends AbstractXAResource<RMInfo<LuceneXAResource.LuceneTxInfo>,LuceneXAResource.LuceneTxInfo> {
+    /**
+     * Construct a {@link LuceneXAResource} with a specified transaction timeout.
+     *
+     * @param transactionTimeout transaction timeout period, in seconds
+     * @param resolverFactory    the resolver-factory we belong to
+     * @param indexes            the list of lucene indexes
+     */
+    public LuceneXAResource(int transactionTimeout, ResolverFactory resolverFactory, Collection<FullTextStringIndex> indexes) {
+      super(transactionTimeout, resolverFactory, new LuceneTxInfo(indexes));
+    }
+
+    protected RMInfo<LuceneTxInfo> newResourceManager() {
+      return new RMInfo<LuceneTxInfo>();
+    }
+
+    //
+    // Methods implementing XAResource
+    //
+
+    protected void doStart(LuceneTxInfo tx, int flags, boolean isNew) {
+    }
+
+    protected void doEnd(LuceneTxInfo tx, int flags) {
+    }
+
+    protected int doPrepare(LuceneTxInfo tx) throws Exception {
+      for (FullTextStringIndex index : tx.indexes)
+        index.prepare();
+      return XA_OK;
+    }
+
+    protected void doCommit(LuceneTxInfo tx) throws Exception {
+      closeIndexes(tx.indexes, true);
+    }
+
+    protected void doRollback(LuceneTxInfo tx) throws Exception {
+      closeIndexes(tx.indexes, false);
+    }
+
+    protected void doForget(LuceneTxInfo tx) {
+    }
+
+
+    static class LuceneTxInfo extends TxInfo {
+      public Collection<FullTextStringIndex> indexes;
+
+      public LuceneTxInfo(Collection<FullTextStringIndex> indexes) {
+        this.indexes = indexes;
+      }
+    }
+  }
 }

Modified: trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverFactory.java
===================================================================
--- trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverFactory.java	2008-10-14 13:53:01 UTC (rev 1310)
+++ trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverFactory.java	2008-10-14 13:53:10 UTC (rev 1311)
@@ -162,7 +162,7 @@
       throws ResolverFactoryException {
     if (logger.isDebugEnabled()) logger.debug("Creating Lucene resolver");
     return canWrite
-      ? new LuceneResolver(modelTypeURI, resolverSession, directory)
-      : new ReadOnlyLuceneResolver(modelTypeURI, resolverSession, directory);
+      ? new LuceneResolver(modelTypeURI, resolverSession, directory, this, true)
+      : new ReadOnlyLuceneResolver(modelTypeURI, resolverSession, directory, this);
   }
 }

Added: trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverUnitTest.java
===================================================================
--- trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverUnitTest.java	                        (rev 0)
+++ trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverUnitTest.java	2008-10-14 13:53:10 UTC (rev 1311)
@@ -0,0 +1,488 @@
+/*
+ * Copyright 2008 The Topaz Foundation 
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not
+ * use this file except in compliance with the License. You may obtain a copy of
+ * the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ *
+ * Contributions:
+ */
+
+package org.mulgara.resolver.lucene;
+
+import java.io.File;
+import java.io.StringWriter;
+import java.io.PrintWriter;
+import java.net.URI;
+
+import javax.transaction.xa.XAResource;
+import javax.transaction.xa.Xid;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import org.apache.log4j.Logger;
+import org.mulgara.itql.TqlInterpreter;
+import org.mulgara.query.Answer;
+import org.mulgara.query.ModelResource;
+import org.mulgara.query.Query;
+import org.mulgara.query.operation.Modification;
+import org.mulgara.query.rdf.Mulgara;
+import org.mulgara.query.rdf.URIReferenceImpl;
+import org.mulgara.resolver.Database;
+import org.mulgara.resolver.JotmTransactionManagerFactory;
+import org.mulgara.server.Session;
+import org.mulgara.util.FileUtil;
+
+/**
+ * Unit tests for the lucene resolver.
+ *
+ * @created 2008-10-13
+ * @author Ronald Tschalär
+ * @copyright &copy; 2008 <a href="http://www.topazproject.org/">The Topaz Project</a>
+ * @licence Apache License v2.0
+ */
+public class LuceneResolverUnitTest extends TestCase {
+  private static final Logger logger = Logger.getLogger(LuceneResolverUnitTest.class);
+
+  private static final URI databaseURI = URI.create("local:database");
+  private static final URI modelURI = URI.create("local:lucene");
+  private static final URI luceneModelType = URI.create(Mulgara.NAMESPACE + "LuceneModel");
+  private final static String textDirectory =
+      System.getProperty("cvs.root") + File.separator + "data" + File.separator + "fullTextTestData";
+
+  private static Database database = null;
+  private static TqlInterpreter ti = null;
+
+  public LuceneResolverUnitTest(String name) {
+    super(name);
+  }
+
+  public static Test suite() {
+    TestSuite suite = new TestSuite();
+    suite.addTest(new LuceneResolverUnitTest("testConcurrentQuery"));
+    suite.addTest(new LuceneResolverUnitTest("testConcurrentReadTransction"));
+    suite.addTest(new LuceneResolverUnitTest("testTransactionIsolation"));
+
+    return suite;
+  }
+
+  /**
+   * Create test objects.
+   */
+  public void setUp() throws Exception {
+    if (database == null) {
+      // Create the persistence directory
+      File persistenceDirectory = new File(new File(System.getProperty("cvs.root")), "testDatabase");
+      if (persistenceDirectory.isDirectory()) {
+        if (!FileUtil.deleteDirectory(persistenceDirectory)) {
+          throw new RuntimeException("Unable to remove old directory " + persistenceDirectory);
+        }
+      }
+      if (!persistenceDirectory.mkdirs()) {
+        throw new Exception("Unable to create directory " + persistenceDirectory);
+      }
+
+      // Define the node pool factory
+      String nodePoolFactoryClassName = "org.mulgara.store.stringpool.xa11.XA11StringPoolFactory";
+
+      // Define the string pool factory
+      String stringPoolFactoryClassName = "org.mulgara.store.stringpool.xa11.XA11StringPoolFactory";
+
+      String tempNodePoolFactoryClassName = "org.mulgara.store.nodepool.memory.MemoryNodePoolFactory";
+
+      // Define the temporary string pool factory
+      String tempStringPoolFactoryClassName = "org.mulgara.store.stringpool.memory.MemoryStringPoolFactory";
+
+      // Define the resolver factory used to manage system models
+      String systemResolverFactoryClassName = "org.mulgara.resolver.store.StatementStoreResolverFactory";
+
+      // Define the resolver factory used to manage temporary models
+      String tempResolverFactoryClassName = "org.mulgara.resolver.memory.MemoryResolverFactory";
+
+      // Create a database which keeps its system models on the Java heap
+      database = new Database(
+                   databaseURI,
+                   persistenceDirectory,
+                   null,                            // no security domain
+                   new JotmTransactionManagerFactory(),
+                   0,                               // default transaction timeout
+                   0,                               // default idle timeout
+                   nodePoolFactoryClassName,        // persistent
+                   new File(persistenceDirectory, "xaNodePool"),
+                   stringPoolFactoryClassName,      // persistent
+                   new File(persistenceDirectory, "xaStringPool"),
+                   systemResolverFactoryClassName,  // persistent
+                   new File(persistenceDirectory, "xaStatementStore"),
+                   tempNodePoolFactoryClassName,    // temporary nodes
+                   null,                            // no dir for temp nodes
+                   tempStringPoolFactoryClassName,  // temporary strings
+                   null,                            // no dir for temp strings
+                   tempResolverFactoryClassName,    // temporary models
+                   null,                            // no dir for temp models
+                   "",                              // no rule loader
+                   "org.mulgara.content.n3.N3ContentHandler");
+
+      database.addResolverFactory("org.mulgara.resolver.lucene.LuceneResolverFactory", persistenceDirectory);
+
+      ti = new TqlInterpreter();
+    }
+  }
+
+
+  /**
+   * The teardown method for JUnit
+   */
+  public void tearDown() {
+  }
+
+  /**
+   * Two queries, in parallel.
+   */
+  public void testConcurrentQuery() throws Exception {
+    logger.info("Testing concurrentQuery");
+
+    try {
+      // Load some test data
+      Session session = database.newSession();
+
+      URI fileURI = new File(textDirectory + File.separator + "data.n3").toURI();
+      if (session.modelExists(modelURI)) {
+        session.removeModel(modelURI);
+      }
+      session.createModel(modelURI, luceneModelType);
+      session.setModel(modelURI, new ModelResource(fileURI));
+
+      // Run the queries
+      try {
+        String q = "select $x from <foo:bar> where $x <foo:hasText> 'American' in <" + modelURI + ">;";
+        Query qry1 = (Query) ti.parseCommand(q);
+        Query qry2 = (Query) ti.parseCommand(q);
+
+        Answer answer1 = session.query(qry1);
+        Answer answer2 = session.query(qry2);
+
+        compareResults(answer1, answer2);
+
+        answer1.close();
+        answer2.close();
+      } finally {
+        session.close();
+      }
+    } catch (Exception e) {
+      fail(e);
+    }
+  }
+
+  /**
+   * Two queries, in concurrent transactions.
+   */
+  public void testConcurrentReadTransction() throws Exception {
+    logger.info("Testing concurrentReadTransaction");
+
+    try {
+      Session session1 = database.newSession();
+      try {
+        XAResource resource1 = session1.getReadOnlyXAResource();
+        Xid xid1 = new TestXid(1);
+        resource1.start(xid1, XAResource.TMNOFLAGS);
+
+        final boolean[] flag = new boolean[] { false };
+
+        Thread t2 = new Thread("tx2Test") {
+          public void run() {
+            try {
+              Session session2 = database.newSession();
+              try {
+                XAResource resource2 = session2.getReadOnlyXAResource();
+                Xid xid2 = new TestXid(2);
+                resource2.start(xid2, XAResource.TMNOFLAGS);
+
+                synchronized (flag) {
+                  flag[0] = true;
+                  flag.notify();
+                }
+
+                // Evaluate the query
+                String q = "select $x from <foo:bar> where $x <foo:hasText> 'Study' in <" + modelURI + ">;";
+                Answer answer = session2.query((Query) ti.parseCommand(q));
+
+                compareResults(expectedStudyResults(), answer);
+                answer.close();
+
+                synchronized (flag) {
+                  while (flag[0])
+                    flag.wait();
+                }
+
+                resource2.end(xid2, XAResource.TMSUCCESS);
+                resource2.commit(xid2, true);
+              } finally {
+                session2.close();
+              }
+            } catch (Exception e) {
+              fail(e);
+            }
+          }
+        };
+        t2.start();
+
+        synchronized (flag) {
+          if (!flag[0]) {
+            try {
+              flag.wait(2000L);
+            } catch (InterruptedException ie) {
+              logger.error("wait for tx2-started interrupted", ie);
+              fail(ie);
+            }
+          }
+          assertTrue("second transaction should have proceeded", flag[0]);
+        }
+
+        String q = "select $x from <foo:bar> where $x <foo:hasText> 'Group' in <" + modelURI + ">;";
+        Answer answer = session1.query((Query) ti.parseCommand(q));
+
+        compareResults(expectedGroupResults(), answer);
+        answer.close();
+
+        synchronized (flag) {
+          flag[0] = false;
+          flag.notify();
+        }
+
+        try {
+          t2.join(2000L);
+        } catch (InterruptedException ie) {
+          logger.error("wait for tx2-terminated interrupted", ie);
+          fail(ie);
+        }
+        assertFalse("second transaction should've terminated", t2.isAlive());
+
+        resource1.end(xid1, XAResource.TMSUCCESS);
+        resource1.commit(xid1, true);
+      } finally {
+        session1.close();
+      }
+    } catch (Exception e) {
+      fail(e);
+    }
+  }
+
+  /**
+   * Two concurrent transactions, one reader, one writer. Verify transaction isolation.
+   */
+  public void testTransactionIsolation() throws Exception {
+    logger.info("Testing transactionIsolation");
+
+    try {
+      Session session1 = database.newSession();
+      try {
+        // start read-only txn
+        XAResource resource1 = session1.getReadOnlyXAResource();
+        Xid xid1 = new TestXid(1);
+        resource1.start(xid1, XAResource.TMNOFLAGS);
+
+        // run query before second txn starts
+        String q = "select $x from <foo:bar> where $x <foo:hasText> 'Group' in <" + modelURI + ">;";
+        Answer answer = session1.query((Query) ti.parseCommand(q));
+
+        compareResults(expectedGroupResults(), answer);
+        answer.close();
+
+        // run a second transaction that writes new data
+        final boolean[] flag = new boolean[] { false };
+
+        Thread t2 = new Thread("tx2Test") {
+          public void run() {
+            try {
+              Session session2 = database.newSession();
+              try {
+                XAResource resource2 = session2.getXAResource();
+                Xid xid2 = new TestXid(2);
+                resource2.start(xid2, XAResource.TMNOFLAGS);
+
+                synchronized (flag) {
+                  flag[0] = true;
+                  flag.notify();
+
+                  while (flag[0])
+                    flag.wait();
+                }
+
+                String q = "insert <foo:nodeX> <foo:hasText> 'Another Group text' into <" + modelURI + ">;";
+                session2.insert(modelURI, ((Modification) ti.parseCommand(q)).getStatements());
+
+                synchronized (flag) {
+                  flag[0] = true;
+                  flag.notify();
+
+                  while (flag[0])
+                    flag.wait();
+                }
+
+                resource2.end(xid2, XAResource.TMSUCCESS);
+                resource2.commit(xid2, true);
+              } finally {
+                session2.close();
+              }
+            } catch (Exception e) {
+              fail(e);
+            }
+          }
+        };
+        t2.start();
+
+        // wait for 2nd txn to have started
+        synchronized (flag) {
+          while (!flag[0])
+            flag.wait();
+        }
+
+        // run query before insert
+        answer = session1.query((Query) ti.parseCommand(q));
+        compareResults(expectedGroupResults(), answer);
+        answer.close();
+
+        // wait for insert to complete
+        synchronized (flag) {
+          flag[0] = false;
+          flag.notify();
+
+          while (!flag[0])
+            flag.wait();
+        }
+
+        // run query after insert and before commit
+        answer = session1.query((Query) ti.parseCommand(q));
+        compareResults(expectedGroupResults(), answer);
+        answer.close();
+
+        // wait for commit to complete
+        synchronized (flag) {
+          flag[0] = false;
+          flag.notify();
+        }
+
+        try {
+          t2.join(2000L);
+        } catch (InterruptedException ie) {
+          logger.error("wait for tx2-terminated interrupted", ie);
+          fail(ie);
+        }
+        assertFalse("second transaction should've terminated", t2.isAlive());
+
+        // run query after commit
+        answer = session1.query((Query) ti.parseCommand(q));
+        compareResults(expectedGroupResults(), answer);
+        answer.close();
+
+        // clean up
+        resource1.end(xid1, XAResource.TMSUCCESS);
+        resource1.commit(xid1, true);
+
+        // start new tx - we should see new data now
+        xid1 = new TestXid(3);
+        resource1.start(xid1, XAResource.TMNOFLAGS);
+
+        answer = session1.query((Query) ti.parseCommand(q));
+        compareResults(concat(expectedGroupResults(), new String[][] { { "foo:nodeX" } }), answer);
+        answer.close();
+
+        resource1.end(xid1, XAResource.TMSUCCESS);
+        resource1.commit(xid1, true);
+      } finally {
+        session1.close();
+      }
+    } catch (Exception e) {
+      fail(e);
+    }
+  }
+
+  private String[][] expectedStudyResults() {
+    return new String[][] {
+        { "foo:node3" }, { "foo:node4" }, { "foo:node11" }, { "foo:node13" }, { "foo:node19" },
+        { "foo:node22" },
+    };
+  }
+
+  private String[][] expectedGroupResults() {
+    return new String[][] {
+        { "foo:node1" }, { "foo:node2" }, { "foo:node4" }, { "foo:node9" }, { "foo:node11" },
+        { "foo:node12" }, { "foo:node13" }, { "foo:node14" }, { "foo:node18" },
+    };
+  }
+
+  private static String[][] concat(String[][] a1, String[][] a2) {
+    String[][] res = new String[a1.length + a2.length][];
+    System.arraycopy(a1, 0, res, 0, a1.length);
+    System.arraycopy(a2, 0, res, a1.length, a2.length);
+    return res;
+  }
+
+  private void compareResults(String[][] expected, Answer answer) throws Exception {
+    try {
+      answer.beforeFirst();
+      for (int i = 0; i < expected.length; i++) {
+        assertTrue("Answer short at row " + i, answer.next());
+        assertEquals(expected[i].length, answer.getNumberOfVariables());
+        for (int j = 0; j < expected[i].length; j++) {
+          URIReferenceImpl uri = new URIReferenceImpl(new URI(expected[i][j]));
+          assertEquals(uri, answer.getObject(j));
+        }
+      }
+      assertFalse(answer.next());
+    } catch (Exception e) {
+      logger.error("Failed test - " + answer);
+      answer.close();
+      throw e;
+    }
+  }
+
+  private void compareResults(Answer answer1, Answer answer2) throws Exception {
+    answer1.beforeFirst();
+    answer2.beforeFirst();
+    assertEquals(answer1.getNumberOfVariables(), answer2.getNumberOfVariables());
+    while (answer1.next()) {
+      assertTrue(answer2.next());
+      for (int i = 0; i < answer1.getNumberOfVariables(); i++) {
+        assertEquals(answer1.getObject(i), answer2.getObject(i));
+      }
+    }
+    assertFalse(answer2.next());
+  }
+
+  private void fail(Throwable throwable) {
+    StringWriter stringWriter = new StringWriter();
+    throwable.printStackTrace(new PrintWriter(stringWriter));
+    fail(stringWriter.toString());
+  }
+
+  private static class TestXid implements Xid {
+    private final int xid;
+
+    public TestXid(int xid) {
+      this.xid = xid;
+    }
+
+    public int getFormatId() {
+      return 'X';
+    }
+
+    public byte[] getBranchQualifier() {
+      return new byte[] { (byte)(xid >> 0x00), (byte)(xid >> 0x08) };
+    }
+
+    public byte[] getGlobalTransactionId() {
+      return new byte[] { (byte)(xid >> 0x10), (byte)(xid >> 0x18) };
+    }
+  }
+}


Property changes on: trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/LuceneResolverUnitTest.java
___________________________________________________________________
Name: svn:keywords
   + Id HeadURL Revision
Name: svn:eol-style
   + native

Modified: trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/ReadOnlyLuceneResolver.java
===================================================================
--- trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/ReadOnlyLuceneResolver.java	2008-10-14 13:53:01 UTC (rev 1310)
+++ trunk/src/jar/resolver-lucene/java/org/mulgara/resolver/lucene/ReadOnlyLuceneResolver.java	2008-10-14 13:53:10 UTC (rev 1311)
@@ -70,11 +70,13 @@
    * @param modelTypeURI     the URI of the lucene model type
    * @param resolverSession  the session this resolver is associated with
    * @param directory        the directory to use for the lucene indexes
+   * @param resolverFactory  the resolver-factory that created us
    * @throws IllegalArgumentException  if <var>directory</var> is <code>null</code>
    */
-  ReadOnlyLuceneResolver(URI modelTypeURI, ResolverSession resolverSession, String directory)
+  ReadOnlyLuceneResolver(URI modelTypeURI, ResolverSession resolverSession, String directory,
+                         ResolverFactory resolverFactory)
       throws ResolverFactoryException {
-    super(modelTypeURI, resolverSession, directory);
+    super(modelTypeURI, resolverSession, directory, resolverFactory, false);
   }
 
   //
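The new transaction-aware behaviour is exercised through the standard javax.transaction.xa API, as the added unit test shows: build an Xid, then drive the resource through start/end/commit. As a self-contained sketch of that plumbing (MinimalXid is a hypothetical stand-in for the test's TestXid; the byte layout mirrors the diff but the class itself is illustrative, not part of the commit):

```java
import javax.transaction.xa.Xid;

// A minimal Xid for driving XAResource.start/end/prepare/commit in tests.
// Sketch only: mirrors the TestXid byte layout from the diff above, but is
// not the committed class.
class MinimalXid implements Xid {
    private final int id;

    MinimalXid(int id) {
        this.id = id;
    }

    // Any stable, application-chosen format id distinguishes test xids.
    public int getFormatId() {
        return 'X';
    }

    // Low 16 bits of the id identify the transaction branch.
    public byte[] getBranchQualifier() {
        return new byte[] { (byte) id, (byte) (id >> 8) };
    }

    // High 16 bits identify the global transaction.
    public byte[] getGlobalTransactionId() {
        return new byte[] { (byte) (id >> 16), (byte) (id >> 24) };
    }

    public static void main(String[] args) {
        Xid xid = new MinimalXid(0x01020304);
        byte[] bq = xid.getBranchQualifier();
        byte[] gt = xid.getGlobalTransactionId();
        // Branch gets the low bytes, global id the high bytes.
        System.out.println(bq[0] + " " + bq[1] + " " + gt[0] + " " + gt[1]);

        // Typical driver loop against a session's XAResource (not run here):
        //   resource.start(xid, XAResource.TMNOFLAGS);
        //   ... queries / inserts on the session ...
        //   resource.end(xid, XAResource.TMSUCCESS);
        //   resource.commit(xid, true);   // true = one-phase commit
    }
}
```

The one-phase `commit(xid, true)` used in the tests skips the separate prepare step; a real transaction manager coordinating multiple resources would call prepare on each resource first, which is the path the new `LuceneXAResource.doPrepare` serves.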



