[TEIID-4977] Support materialization as the 8.12.x version did

Type: Feature Request
Resolution: Done
Priority: Critical
Version: 8.12.11.6_4

Need to enable the materialization process to load the cache "offline" without impacting access to the current cache and data. This would be similar to how the 8.12.x version was implemented, and to how RDBMS materialization is done.
Van Halbert <vhalbert@redhat.com> changed the Status of bug 1492798 from MODIFIED to ON_QA
Van Halbert <vhalbert@redhat.com> changed the Status of bug 1492798 from NEW to ASSIGNED
Quickstarts have been updated to support JDG 7.1, and the quickstarts that supported JDG 6 have been removed.
Cool. The next step, once dynamic cache creation is available in Infinispan, is to make this as simple as the current internal materialization, where one only needs to flip the flag with no other configuration.
Changing to using only the clustered configuration fixed the issue, along with the clarification on the syntax order and the changes to clear the cache. Testing looks good.
Also tested scaling JDG down to zero and restarting, multiple times. Materialization restarted, as expected, and querying continued without having to reconnect.
Regarding the 'renaming' process: when the cache names are swapped, the 'ST' cache is cleared, immediately reducing the total memory footprint in JDG/Infinispan.
Format for -vdb.xml for the BEFORE and AFTER:
"teiid_rel:MATVIEW_BEFORE_LOAD_SCRIPT" 'execute {modelName}.native(''truncate {modelName}.{ST}'');',
"teiid_rel:MATVIEW_AFTER_LOAD_SCRIPT" 'execute {modelName}.native(''rename {modelName}.{ST} {modelName}.{MV}'');',
Example:
"teiid_rel:MATVIEW_BEFORE_LOAD_SCRIPT" 'execute StockJDGSource.native(''truncate StockJDGSource.ST_StockCache'');',
"teiid_rel:MATVIEW_AFTER_LOAD_SCRIPT" 'execute StockJDGSource.native(''rename StockJDGSource.ST_StockCache StockJDGSource.StockCache'');',
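For context, this is roughly how those scripts sit inside a -vdb.xml view definition. This is a hedged sketch only: the view name and columns are illustrative (borrowed from the StocksMatView.Stock names seen in the log below), while MATERIALIZED, MATERIALIZED_TABLE, and "teiid_rel:ALLOW_MATVIEW_MANAGEMENT" are standard Teiid materialization options:

```xml
<model name="StocksMatView" type="VIRTUAL">
    <metadata type="DDL"><![CDATA[
        CREATE VIEW Stock (
            symbol string PRIMARY KEY,
            price double
        ) OPTIONS (
            MATERIALIZED 'TRUE',
            MATERIALIZED_TABLE 'StockJDGSource.StockCache',
            "teiid_rel:ALLOW_MATVIEW_MANAGEMENT" 'true',
            "teiid_rel:MATVIEW_BEFORE_LOAD_SCRIPT" 'execute StockJDGSource.native(''truncate StockJDGSource.ST_StockCache'');',
            "teiid_rel:MATVIEW_AFTER_LOAD_SCRIPT" 'execute StockJDGSource.native(''rename StockJDGSource.ST_StockCache StockJDGSource.StockCache'');'
        ) AS SELECT symbol, price FROM Source.StockTable;
    ]]></metadata>
</model>
```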
The issue was resolved by changing JDG to use the clustered configuration with the following caches:
<replicated-cache name="teiid-alias-naming-cache" configuration="replicated"/>
<distributed-cache name="stockCache"/>
<distributed-cache name="st_stockCache"/>
Since all use cases will be based on clustering, the use of JDG/infinispan in standalone will no longer be considered.
One modification: the
SWAP MV ST
command changed to
rename ST MV
Make sure the staging table comes first, since after the rename the old contents are purged. I updated the attached vdb.xml files.
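To illustrate why the staging table must come first, here is a minimal Python sketch of the alias bookkeeping behind `rename ST MV` (the function, dict-based caches, and names are hypothetical; the real logic lives in the Infinispan translator): the logical names swap which physical cache they point at, and the cache now acting as the staging target is cleared.

```python
def rename(alias_cache, staging, matview, caches):
    """Swap which physical cache backs the ST and MV names, then
    purge the cache now aliased as staging (the stale MV data)."""
    alias_cache[staging], alias_cache[matview] = (
        alias_cache[matview], alias_cache[staging])
    # After the swap, the old materialized contents are purged,
    # immediately reducing the memory footprint.
    caches[alias_cache[staging]].clear()

# ST_StockCache holds the freshly loaded data, StockCache the stale data.
alias = {"ST_StockCache": "st_stockCache", "StockCache": "stockCache"}
caches = {"st_stockCache": {"RHT": 84.0}, "stockCache": {"RHT": 80.0}}

rename(alias, "ST_StockCache", "StockCache", caches)

# StockCache now resolves to the freshly loaded physical cache...
assert alias["StockCache"] == "st_stockCache"
# ...and the physical cache holding the stale data has been emptied.
assert caches["stockCache"] == {}
```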
I actually removed the caches in the JDG standalone config, restarted JDG, then re-added the caches to check.
There appears to be an issue with the registering of descriptors, getting this error:
11:19:05,827 INFO [org.teiid.MATVIEWS] (Worker0_QueryProcessorQueue14) qBavvKZfOH1H Materialization of view StocksMatView.Stock started.
11:19:05,952 ERROR [org.teiid.CONNECTOR] (Worker3_QueryProcessorQueue29) qBavvKZfOH1H Connector worker process failed for atomic-request=qBavvKZfOH1H.0.102.7: java.lang.IllegalArgumentException: Unknown type name : StockJDGSource.ST_StockCache
at org.infinispan.protostream.impl.SerializationContextImpl.getTypeIdByName(SerializationContextImpl.java:370)
at org.infinispan.protostream.WrappedMessage.writeMessage(WrappedMessage.java:199)
at org.infinispan.protostream.ProtobufUtil.toWrappedByteArray(ProtobufUtil.java:131)
at org.infinispan.query.remote.client.BaseProtoStreamMarshaller.objectToBuffer(BaseProtoStreamMarshaller.java:56)
at org.infinispan.commons.marshall.AbstractMarshaller.objectToByteBuffer(AbstractMarshaller.java:70)
at org.infinispan.client.hotrod.impl.RemoteCacheImpl.obj2bytes(RemoteCacheImpl.java:496)
at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:268)
at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:79)
at org.teiid.translator.infinispan.hotrod.InfinispanUpdateExecution.execute(InfinispanUpdateExecution.java:179)
at org.teiid.dqp.internal.datamgr.ConnectorWorkItem$1.execute(ConnectorWorkItem.java:399)
at org.teiid.dqp.internal.datamgr.ConnectorWorkItem.execute(ConnectorWorkItem.java:361)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Attached is the vdb.
vhalbert2 See the attached file for an example materialization that matches the pre-9.3 approach using staging tables.
You would need 3 different kinds of caches for this solution to work similarly to the previous JDG based example, though note the approach may be entirely different. The 3 caches are:
- MV (materialized cache per cached table; this name can be anything)
- ST (staging cache per cached table; this name can be anything)
- teiid-alias-naming-cache (cluster-wide internal cache to keep track of aliased tables; the name must match exactly, and it must be created as a replicated cache. This can be pre-configured with any Teiid deployment to keep user-based configuration simple.)
When you enable "DirectQueryProcedure", the translator will start looking for "teiid-alias-naming-cache". If this cache is available, then the aliased table feature will kick in. Also, when "DirectQueryProcedure" is enabled, there are two commands supported through this mechanism:
- truncate <fully-qualified-table-name> ==> clears the whole cache
- swap <from-table> <to-table> ==> swaps the two tables (from becomes to, and to becomes from)
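A rough Python sketch of how the aliased table feature could resolve and swap cache names (illustrative only; the function names and the dict standing in for teiid-alias-naming-cache are assumptions, and the real resolution happens inside the Hot Rod translator):

```python
def resolve_cache(table_cache_name, alias_cache):
    """Return the physical cache name for a table: if an alias entry
    exists, follow it; otherwise use the configured name as-is."""
    return alias_cache.get(table_cache_name, table_cache_name)

def swap(alias_cache, from_table, to_table):
    """The 'swap <from-table> <to-table>' command: afterward, each
    name resolves to the other's previous physical cache."""
    a = resolve_cache(from_table, alias_cache)
    b = resolve_cache(to_table, alias_cache)
    alias_cache[from_table], alias_cache[to_table] = b, a

alias = {}  # stands in for teiid-alias-naming-cache
swap(alias, "ST_StockCache", "StockCache")

assert resolve_cache("StockCache", alias) == "ST_StockCache"
assert resolve_cache("ST_StockCache", alias) == "StockCache"
```

With no alias entry present, `resolve_cache` falls back to the configured name, which matches the behavior when the alias cache is absent and the feature is off.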
An example VDB is attached; take a look, test it, and give me feedback. Also, I will let you document this for 6.4.
Here's what I was thinking:
- configure 2 tables on the model, where each table configures its cache name
- on the RA, configure the name of the cache used for managing the alias cache names, mapping table cache name ==> cache name in use
- could copy the same procedure logic for swapping and truncating the cache, and for swapping the assigned names in the alias cache
- the RA, when an alias cache is defined, would get the cache name to use by reading the alias cache; otherwise, use what's passed in
questions:
- how to initialize the alias cache with the defaults
could use: OPTIONS(UPDATABLE 'TRUE', "OBJECT_NS:primary_table" 'MCE_AwardBookingClassJDGSource.MCE_AwardBookingClass');
This could be used to initialize the alias cache defaults, so that you know which 2 tables are linked.
Just thinking.
I think this would be a bigger change. May need to give more thought to how this can be done.
Can the 8.12.x logic that Steve implemented be used in this case, where 3 caches are used to accomplish this scenario?
Jan Stastny <jstastny@redhat.com> changed the Status of bug 1492798 from ON_QA to VERIFIED