Channel: Archives des Documentum - dbi Blog

Documentum DQMan for repositories 16.7 and after


You may have noticed when upgrading your Documentum infrastructure that after 7.3 the DQMan utility from FME is no longer usable (you get a red cross without any warning on the screen). In this blog I’ll show you how to get it working again. I tested it on 16.7, but it should also work on later versions.

Install Java 32bit

DQMan only works with 32-bit Java, as it is not (yet) compiled for x64.

In my case, I installed Java 8u192 in C:\jre-1.8.192_x32.

Install DFC

Install the DFC version you need; I have a 16.7 repository, so I installed DFC 16.7. It is not mandatory to install the DFC with the 32-bit Java; it does not prevent DQMan from starting. Fill in the dfc.properties so that it can connect to the repository.
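For reference, a minimal dfc.properties pointing to the docbroker might look like the sketch below. The host name, port and repository names are placeholders for your own environment, not values from this setup:

```properties
# docbroker the DFC should query (host/port are placeholders)
dfc.docbroker.host[0]=content_server01.domain.com
dfc.docbroker.port[0]=1489

# global registry used by DFC-based clients (names are examples)
dfc.globalregistry.repository=GR_Repo
dfc.globalregistry.username=dm_bof_registry
dfc.globalregistry.password=<encrypted_password>
```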

Install DQMan

You can install it wherever you want; in my case I put it in C:\Users\Public\dqman. I have the latest DQMan version, which is currently 6.0.1.34 (you can verify it in the About dialog).

Setup DQMan

Everything is now installed and we can set up DQMan to work with our repository.

First you need a dmcl.dll from an earlier version; put it into the root folder of DQMan, e.g. C:\Users\Public\dqman. Mine is 156KB.

Now copy the dfc.properties from your DFC installation to the config folder of dqman: C:\Users\Public\dqman\config\dfc.properties

If you have a dmcl40.dll in the root folder delete it.

Then create a java.ini file in the root folder with the following inside:

java_library_path="C:\jre-1.8.192_x32\bin\client\jvm.dll"
JAVA_OPTIONS=" -Xcheck:jni -XX:+RestoreMXCSROnJNICalls -Xmx256m"
java_classpath = "C:\Documentum\dctm.jar;C:\Documentum\config;C:\Documentum\Shared\dfc.jar;C:\Users\Public\dqman\config"

Of course, update the paths according to your installation.

You should now be able to launch it; you can verify the DFC version used in the About dialog:

This article, Documentum DQMan for repositories 16.7 and after, first appeared on the dbi services Blog.


Documentum – Custom facets not showing up after full reindex?


At the beginning of the year, while performing a migration from a Documentum 7.3 environment on VMs to Documentum 16.4 on Kubernetes, a customer had an issue where their custom facets weren’t showing up in D2 after a full reindex. At the end of the migration, since xPlore had been upgraded as well (from xPlore 1.5 to 16.4, from VM to K8s), a full reindex was executed so that all the documents would be indexed. In this case, several million documents were indexed and it took a few days. Unfortunately, at the end of the full reindex, the customer saw that the facets weren’t working…

Why is that exactly? Well, when configuring custom facets, you need to add a sub-path configuration for the facet computing, and that is a schema change inside the index. Each and every schema change requires at the very least an online rebuild of the index so that the change is propagated to each and every node of the index. Unless you perform this online rebuild, the xPlore index schema will NOT be refreshed and the indexing of documents will therefore keep using the old schema. In case you are wondering what the “online rebuild” I’m talking about is, it’s the action behind the “Rebuild Index” button that you can find in the Dsearch Admin UI under “Home >> Data Management >> <DOMAIN_NAME> (usually the Repo name) >> <COLLECTION_NAME> (e.g.: default or Node1_CPS1 or Node4_CPS2 …)“:

This action will not index any new content; it will however create a new index based on the refreshed schema and then copy all the nodes from the current index to the new one. At the end, it will replace the current index with the new one, and this can be done online without downtime. This button was initially present for both Data collections (where your documents are) and ApplicationInfo collections (ACLs, Groups). However, in recent versions of xPlore (at least since 16.4), the feature has been removed for the ApplicationInfo collections.
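For reference, the facet sub-path configuration mentioned above lives in indexserverconfig.xml. A hedged sketch of what such an entry can look like, patterned on the sub-path examples shipped in that file: the exact attribute set varies across xPlore versions, and `keywords` is just a hypothetical custom attribute, so check against the existing entries in your own indexserverconfig.xml:

```xml
<!-- inside the <category name="dftxml"> / <sub-paths> section of indexserverconfig.xml -->
<sub-path leading-wildcard="false" compress="false" type="string"
          returning-contents="true" include-descendants="false"
          enumerate-repeating-elements="false" full-text-search="true"
          value-comparison="true" path="dmftmetadata//keywords"/>
```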

 

So, what is the minimum required to configure custom facets? The answer is that it depends… :). Here are some examples:

  • If xPlore has never been started, the index doesn’t exist yet, so configuring the facets inside the indexserverconfig.xml file takes effect immediately at the first startup. In this case, an online rebuild isn’t even needed. However, it might not always be easy to modify the indexserverconfig.xml file before xPlore even starts; it depends on how you deploy the components…
  • If xPlore has been started at least once but indexing hasn’t started yet (0 content inside the Data collections), then you can just log in to the Dsearch Admin UI and perform the online rebuild on the empty collections. This will be almost instantaneous, so you will most probably not even see it happen.
    • If this is a new environment, make sure the IndexAgent is then started in normal mode so that it processes incoming indexing requests, and that’s it
    • If this is an existing environment, you will need to execute a full reindex operation using your preferred method (IndexAgent full reindex action, select queries, the ids.txt file)
  • If xPlore has been started at least once and the indexing has been completed, then you will need to perform the online rebuild as well. However, this time it will probably take quite some time because, as mentioned earlier, it needs to copy all the indexed nodes to a new index. This process is normally faster than a full reindex because it’s only xPlore-internal communication: it only duplicates the existing index (and applies the schema change), with no exchange with the Content Server. Once the online rebuild has been performed, the facets should be available.

 

Even if an online rebuild is faster than a full reindex, depending on the size of the index it might still take hours to days to complete. It is therefore quite important to plan this properly in advance in case of a migration or upgrade, so that you can start with an online rebuild on an empty index (therefore instantaneous) and then perform the needed full reindex afterwards, instead of the opposite. This might save you several days of pain with your users and considerably reduce the load on the Dsearch/CPS.

This behavior wasn’t really well documented before. I had some exchanges with OpenText on this topic and they created KB15765485 based on these exchanges and on what is described in this blog. I’m not sure it is really better now, but at least there is a little more information.

 


Documentum – xPlore online rebuild stopped because of “immense term”


In relation to my previous blog about custom facets not showing up after a full reindex: a customer was doing a migration that had just completed. After the full reindex, there were no facets, because of what I explained in that blog. Knowing that the online rebuild is normally faster than a full reindex, I helped start this operation, but after a little more than a day of processing, it failed on a document. The online rebuild is something really useful in xPlore and something I found pretty robust, since it usually works quite well.

The online rebuild stopped with the following error in the dsearch.log:

2020-01-21 17:53:44,853 WARN [Index-Rebuilder-default-0-Worker-0] c.e.d.c.f.indexserver.core.index.plugin.CPSPlugin - Content Processing Service failed for [090f1234800d647e] with error code [7] and message [Communication error while processing req 090f1234800d647e]
2020-01-21 17:53:45,758 WARN [Index-Rebuilder-default-0] c.e.d.c.f.i.core.collection.FtReindexTask - Reindex for index default.dmftdoc failed
com.emc.documentum.core.fulltext.common.exception.IndexServerException: java.lang.IllegalArgumentException: Document contains at least one immense term in field="<>/dmftcontents<0>/dmftcontent<0>/ tkn" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[109, 97, 115, 116, 101, 114, 102, 105, 108, 101, 32, 112, 115, 117, 114, 32, 99, 97, 115, 101, 32, 114, 101, 118, 105, 101, 119, 32, 32, 32]...', original message: bytes can be at most 32766 in length; got 39938386
	at com.emc.documentum.core.fulltext.indexserver.core.collection.ESSCollection.recreatePathIndexNB(ESSCollection.java:3391)
	at com.emc.documentum.core.fulltext.indexserver.core.collection.ESSCollection.reindexNB(ESSCollection.java:1360)
	at com.emc.documentum.core.fulltext.indexserver.core.collection.ESSCollection.reindex(ESSCollection.java:1249)
	at com.emc.documentum.core.fulltext.indexserver.core.collection.FtReindexTask.run(FtReindexTask.java:204)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Document contains at least one immense term in field="<>/dmftcontents<0>/dmftcontent<0>/ tkn" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[109, 97, 115, 116, 101, 114, 102, 105, 108, 101, 32, 112, 115, 117, 114, 32, 99, 97, 115, 101, 32, 114, 101, 118, 105, 101, 119, 32, 32, 32]...', original message: bytes can be at most 32766 in length; got 39938386
	at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:687)
	at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:359)
	at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:318)
	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
	at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:465)
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1526)
	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1252)
	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1234)
	at com.xhive.xDB_10_7_r4498571.xo.addEntry(xdb:156)
	at com.xhive.xDB_10_7_r4498571.qo.a(xdb:194)
	at com.xhive.xDB_10_7_r4498571.qo.a(xdb:187)
	at com.xhive.core.index.ExternalIndex.add(xdb:368)
	at com.xhive.core.index.XhiveIndex.a(xdb:321)
	at com.xhive.core.index.XhiveIndex.a(xdb:330)
	at com.xhive.xDB_10_7_r4498571.eq$b$1.a(xdb:142)
	at com.xhive.xDB_10_7_r4498571.bo$a.a(xdb:58)
	at com.xhive.xDB_10_7_r4498571.bo$f.a(xdb:86)
	at com.xhive.xDB_10_7_r4498571.eq$b.a(xdb:126)
	at com.xhive.core.index.PathValueIndexModifier.a(xdb:335)
	at com.xhive.core.index.PathValueIndexModifier.b(xdb:291)
	at com.xhive.core.index.PathValueIndexModifier.a(xdb:279)
	at com.xhive.core.index.PathValueIndexModifier.d(xdb:514)
	at com.xhive.core.index.PathValueIndexModifier.a(xdb:456)
	at com.xhive.core.index.PathValueIndexModifier.a(xdb:435)
	at com.xhive.core.index.PathValueIndexModifier.a(xdb:414)
	at com.xhive.core.index.PathValueIndexModifier.b(xdb:403)
	at com.xhive.core.index.PathValueIndexModifier.a(xdb:397)
	at com.xhive.xDB_10_7_r4498571.ca.a(xdb:666)
	at com.xhive.xDB_10_7_r4498571.ca.a(xdb:504)
	at com.xhive.xDB_10_7_r4498571.ca.a(xdb:494)
	at com.xhive.xDB_10_7_r4498571.ca.a(xdb:362)
	at com.xhive.xDB_10_7_r4498571.ca.a(xdb:213)
	at com.xhive.xDB_10_7_r4498571.ca.a(xdb:179)
	at com.xhive.core.index.XhiveIndexInConstruction.indexNext(xdb:199)
	at com.emc.documentum.core.fulltext.indexserver.core.collection.ESSCollection.reindexByWorker(ESSCollection.java:3538)
	at com.emc.documentum.core.fulltext.indexserver.core.collection.FtReindexTask$ReindexWorker.run(FtReindexTask.java:91)
	... 1 common frames omitted
Caused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 39938386
	at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
	at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
	at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:663)
	... 36 common frames omitted

 

I don’t remember seeing this error before in relation to Documentum, but I did see something similar on another Lucene-based engine, and as you can see in the exception stack, it is indeed linked to Lucene… Anyway, I tried to start the online rebuild again but it failed on the exact same document. I wasn’t sure whether this was a document issue or some kind of bug in xPlore, so I opened the SR#4481792 and in the meantime did some checks. On the current index, I could display the dmftxml content of any random document in less than a second, except for this specific document, where it was just loading forever. Since the availability of the facets was rather time sensitive, I removed this specific document from the index using the “deleteDocs.sh” script and started the online rebuild again… However, it failed on a second document.

The error above happened for at least two documents, but there might have been many more. Trial and error, deleting impacted documents and restarting the online rebuild, could potentially have taken ages. I was certain that the full reindex would complete for the millions of documents in a couple of days, because it had happened just before. Therefore, instead of continuing with the online rebuild, which could have failed dozens of times on bad documents, I chose another approach:

  • Delete the Data collections containing the indexed documents
    • Navigate to: Home >> Data Management >> <DOMAIN_NAME> (usually Repo name)
    • Delete the collection(s) with Category=dftxml and Usage=Data using the red cross on the right side of the table
  • Re-create the needed collections with the same parameters
    • Still under: Home >> Data Management >> <DOMAIN_NAME> (usually Repo name)
    • Click on: New Collection
    • Set the Name to: <COLLECTION_NAME> (e.g.: default or Node1_CPS1 or Node4_CPS2 …)
    • Set the Usage to: Data
    • Set the Document Category to: dftxml
    • Set the Binding Instance to the Dsearch which should be used, probably PrimaryDsearch
    • Select the correct location to use. If you select “Same location as domain”, the new collection will be put, as usual, in your domain data folder. If you want to use another location, tick the checkbox and pick the correct one: in this case, you must have created the needed storage location in advance (“Home >> System Overview >> Global Configuration >> Storage Location“)
  • Perform the online rebuild (as mentioned above) on the empty collections (instantaneous)
  • Perform the full reindex

Doing the above removes all indexed documents, meaning that searches will not return anything anymore, which is worse than just not having facets from a user’s perspective. However, it was just before the weekend, so in this case it was fine for the end users, and at least it completely solved the issue: the facets were available on the next Monday morning. With the full reindex logs and some smart processing (I tried to give some examples in this blog), I could find the list of all documents that had the above issue… In the end, it was really a document content issue and nothing related to xPlore. As mentioned in the previous blog, I had some exchanges with OpenText on this topic and they created KB15765485 based on them. It’s not exactly the procedure that I applied, since I did it in the Dsearch Admin UI, but the result should be the same: a cleaned-up index. As one would say, all roads lead to Rome… 😉
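As a side note on the log processing: the failing object IDs can be scraped out of dsearch.log with a small shell helper. A sketch, assuming the exact message format shown in the excerpt above; the log path and name will of course differ in your installation:

```shell
# extract_failed_ids <log file>
# Prints the unique r_object_ids from "Content Processing Service failed for
# [<id>]" lines, ready to be fed back to the IndexAgent via ids.txt.
extract_failed_ids() {
  grep 'Content Processing Service failed for' "$1" \
    | sed -n 's/.*failed for \[\([0-9a-f]\{16\}\)\].*/\1/p' \
    | sort -u
}

# typical use (path depends on your installation):
#   extract_failed_ids dsearch.log > ids.txt
```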

 


Documentum – IndexAgent can’t start in normal mode


Everybody familiar with Documentum knows that just starting the JBoss/WildFly hosting an IndexAgent isn’t really enough to get indexing working: the IndexAgent must be started from its UI (or via DA, via the job, via iapi, automatically via the Repository startup, …). Starting the IA in “normal mode” usually takes a few seconds. A few times, I faced an IA that apparently didn’t want to start: whenever the request was submitted, it would just try but never succeed. In this blog, I will try to explain why this happens and what can be done to restore it.

When an IndexAgent starts, it does a few things: it sets up the filters/exclusions, checks all the configured parameters and finally communicates with the Repository to do a cleanup. The step most probably causing this “issue” is the last one. While the IndexAgent is running, it consumes documents for indexing. During this process, it marks some of the items in the dmi_queue_item table as taken into account. However, if the xPlore Server is stopped while these items are being processed, the processing might not fully complete, leaving in-progress tasks that were cancelled. To avoid non-indexed documents, the very first task of the IndexAgent, even before it is marked as started in normal mode, is therefore to reinitialize the status of these items by putting them back into the queue to process. The IndexAgent will never be marked as running if this doesn’t complete, and this is what happens whenever you face an IndexAgent stuck in the start process.

To see the details of the start process of an IndexAgent, you can just look into its log file whenever you submit the request. This is an example of a “working” startup:

2020-11-13 14:29:29,765 INFO FtIndexAgent [http--0.0.0.0-9202-3]DM_INDEX_AGENT_START
2020-11-13 14:29:29,808 INFO Context [http--0.0.0.0-9202-3]Filter cabinets_to_exclude value: Temp, System, Resources,
2020-11-13 14:29:29,808 INFO Context [http--0.0.0.0-9202-3]Filter types_to_exclude value: dmi_expr_code, dmc_jar, dm_method, dm_activity, dmc_module, dmc_aspect_type, dm_registered, dm_validation_descriptor, dm_location, dmc_java_library, dm_public_key_certificate, dm_client_registration, dm_procedure, dmc_dar, dm_process, dmc_tcf_activity_template, dm_ftwatermark, dmc_wfsd_type_info, dm_menu_system, dm_plugin, dm_script, dmc_preset_package, dm_acs_config, dm_business_pro, dm_client_rights, dm_cont_transfer_config, dm_cryptographic_key, dm_docbase_config, dm_esign_template, dm_format_preferences, dm_ftengine_config, dm_ftfilter_config, dm_ftindex_agent_config, dm_jms_config, dm_job, dm_mount_point, dm_outputdevice, dm_server_config, dm_xml_application, dm_xml_config, dm_ftquery_subscription, dm_smart_list,
2020-11-13 14:29:29,808 INFO Context [http--0.0.0.0-9202-3]Filter folders_to_exclude value: /Temp/Jobs, /System/Sysadmin/Reports, /System/Sysadmin/Jobs,
2020-11-13 14:29:29,811 INFO AgentInfo [http--0.0.0.0-9202-3]Start
Documentum Index Agent 1.5.0170.0173
Java Version                    1.7.0_72
DFC Version                     7.2.0170.0165
DMCL Version                    7.2.0170.0165
Docbase (Repo01)                7.2.0160.0297  Linux64.Oracle

Start Configuration Information
 Instance
  indexagent_instance_name(AgentInstanceName)=xplore_server01_9200_IndexAgent
  docbase_name(DocbaseName)=Repo01
  docbase_user(DocbaseUser)=
  docbase_domain(DocbaseDomain)=
  runaway_item_timeout(RunawayItemTimeout)=600000
  runaway_thread_timeout(RunawayThreadTimeout)=600000
  parameter_list(InstanceOptionalParams)
 Status
  frequency(StatusFrequency)=5000
  history_size(StatusHistorySize)=20
 Connectors
  class_name(ClassName)=com.documentum.server.impl.fulltext.indexagent.connector.DocbaseNormalModeConnector
  parameter_list(Options)
   parameter=save_queue_items, value=false
   parameter=queue_user, value=dm_fulltext_index_user
   parameter=wait_time, value=60000
   parameter=batch_size, value=1000
  class_name(ClassName)=com.documentum.server.impl.fulltext.indexagent.connector.FileConnector
  parameter_list(Options)
   parameter=wait_time, value=2000
   parameter=batch_size, value=100
   parameter=file_name, value=ids.txt
 Exporter
  queue_size(PrepQSize)=250
  queue_low_percent(PrepQLowPercentage)=90
  wait_time(PrepWaitTime)=100
  thread_count(PrepWorkers)=2
  shutdown_timeout(PrepShutdownTimeout)=60000
  runaway_timeout(RunawayItemTimeout)=600000
  all_filestores_local(areAll_filestores_local)=false
  local_content_area(LocalContentArea)=/data/primary/Indexagent_Repo01/export
  local_filestore_map(LocalFileStoreMap)
  local_content_remote_mount(LocalContentRemoteMount)=null
  content_clean_interval(ContentCleanInterval)=2000000
  keep_dftxml(KeepDftxml)=false
  parameter_list(PrepOptionalParameters)=
   parameter=contentSizeLimit, value=367001600
 Indexer
  queue_size(IndexQSize)=500
  queue_low_percent(IndexQLowPercentage)=90
  queue_size(CallbackQSize)=200
  queue_low_percent(CallbackQLowPercentage)=90
  wait_time(IndexWaitTime)=100
  thread_count(IndexWorkers)=1
  shutdown_timeout(IndexShutdownTimeout)=60000
  runaway_timeout(IndexRunawayTimeout)60000
  partition_config
   default_partition collection_name(DefaultCollection)=null
  partitions(PartitionMap)
 Indexer Plugin Config
  class_name(IndexerClassName)=com.documentum.server.impl.fulltext.indexagent.plugins.enterprisesearch.DSearchFTPlugin
  parameter_list(IndexerParams)
   parameter=dsearch_qrserver_host, value=lb_xplore_server.domain.com
   parameter=query_plugin_mapping_file, value=/app/dctm/server/fulltext/dsearch/dm_AttributeMapping.xml
   parameter=max_tries, value=2
   parameter=max_pending_requests, value=10000
   parameter=load_balancer_enabled, value=true
   parameter=dsearch_qrserver_protocol, value=HTTPS
   parameter=dsearch_qrygen_mode, value=both
   parameter=security_mode, value=BROWSE
   parameter=max_requests_in_batch, value=10
   parameter=dsearch_qrserver_port, value=9302
   parameter=dsearch_config_port, value=9302
   parameter=dsearch_config_host, value=xplore_server01.domain.com
   parameter=max_batch_wait_msec, value=1000
   parameter=dsearch_qrserver_target, value=/dsearch/IndexServerServlet
   parameter=dsearch_domain, value=Repo01
   parameter=group_attributes_exclude_list, value=i_all_users_names
End Configuration Information

2020-11-13 14:29:29,828 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] running DQL query: select primary_class from dmc_module where any a_interfaces = 'com.documentum.fc.indexagent.IDfCustomIndexFilter'
2020-11-13 14:29:29,833 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.services.message.impl.type.MailMessageChildFilter
2020-11-13 14:29:29,834 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.services.message.impl.type.MailMessageChildFilter
2020-11-13 14:29:29,834 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.server.impl.fulltext.indexagent.filter.defaultCabinetFilterAction
2020-11-13 14:29:29,834 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.server.impl.fulltext.indexagent.filter.defaultFolderFilterAction
2020-11-13 14:29:29,834 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.server.impl.fulltext.indexagent.filter.defaultTypeFilterAction
2020-11-13 14:29:29,869 INFO defaultFilters [http--0.0.0.0-9202-3]Populated cabinet cache for filter CabinetsToExclude with count 3
2020-11-13 14:29:30,462 INFO defaultFilters [http--0.0.0.0-9202-3]Populated folder id cache for filter FoldersToExclude with count 140
2020-11-13 14:29:30,488 INFO DocbaseNormalModeConnector [http--0.0.0.0-9202-3][DM_INDEX_AGENT_QUERY_BEGIN] update dmi_queue_item objects set task_state = ' ', set sign_off_user = ' ', set dequeued_by = ' ', set message = ' ' where name = 'dm_fulltext_index_user' and task_state = 'acquired' and sign_off_user = 'xplore_server01_9200_IndexAgent'
2020-11-13 14:29:30,488 INFO DocbaseNormalModeConnector [http--0.0.0.0-9202-3][DM_INDEX_AGENT_QUERY_UPDATE_COUNT] 0
2020-11-13 14:29:30,489 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] DSS Server host: xplore_server01.domain.com
2020-11-13 14:29:30,489 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] DSS Server protocol: HTTPS
2020-11-13 14:29:30,489 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] DSS Server port: 9302
2020-11-13 14:29:30,489 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] DSS Server domain: Repo01
2020-11-13 14:29:30,502 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] Index Server Status: normal

 

When this issue occurs, the last lines of the excerpt above (the DM_INDEX_AGENT_QUERY_BEGIN update and everything after it) will not appear. As you can see, the DQL query executed is recorded in the log, as well as the number of items updated. The “issue” is that if too many items match the WHERE clause (acquired items), this query can take hours to complete (if it completes at all), and it therefore looks as if the start isn’t working. Because of how DQL works, this kind of query on thousands of objects or more is very DB intensive, which introduces a big performance hit.
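Before submitting the start request, it can be worth measuring how many items that startup query would have to touch. A hedged DQL sketch using the standard dmi_queue_item attributes (task_state 'acquired' corresponds to the items described above; an empty task_state means awaiting processing):

```sql
-- DQL (iapi/idql): size of the fulltext queue per state for the indexing user
SELECT task_state, count(*)
FROM dmi_queue_item
WHERE name = 'dm_fulltext_index_user'
GROUP BY task_state
```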

How do you end up with hundreds of thousands or even millions of acquired items, you may wonder? Well, each time it happened to me, it was related to huge batches or jobs updating millions of items, or to big migrations/imports of objects. As you know, the events registered in the dmi_registry table trigger the creation of new entries in the dmi_queue_item table. Therefore, when you import a lot of documents for example, it is highly recommended to carefully manage this indexing queue table, because it can cause huge performance issues since it is used a lot inside Documentum for various purposes. This is especially true whenever Lifecycles are in the picture, because then processes (like ApplyD2Config) generate a lot of dm_save events per document and therefore duplicates in the table. I won’t go into these details in this blog, but in short, you can choose to remove the events from the dmi_registry during the import and put them back afterwards (manually indexing the imported documents at the end), or do manual cleanups of the dmi_queue_item table during the process. Unfortunately, if you aren’t aware that a huge migration is taking place, the situation can quickly become complicated, with millions and millions of items. The last time I saw something similar, it was an import started “in secret” before the weekend, filling the dmi_queue_item table. The IndexAgent was started and therefore processed the items, but it wasn’t fast enough. On Monday morning, we had the pleasant surprise of seeing around 6 million acquired items and 9 million more awaiting…

I think (to be confirmed) the behavior changed in more recent versions, but this environment was using xPlore 1.5, and there the IndexAgent might pull new batches of documents for processing even if a lot are still in progress. The xPlore Servers (a Federation) weren’t sleeping at all, since they had actually already processed millions of items, but there were just too many to handle, and unfortunately the IA kind of entered a dead end where updating the dmi_queue_item table would take too long for the processing to ever become effective again. I didn’t try to restart the IndexAgent because I knew it would never complete, but I thought this might make an interesting blog post. There is probably a KB on the OpenText site describing this, since it is rather well known.

As you might expect, a DQL query supposed to update 6 million rows in a table that contains at least three times that isn’t going to happen. So what can be done to restore the system performance and allow the IndexAgent to restart properly? DQL isn’t very good at processing huge batches, so your best bet is to go to the Database directly to avoid the Documentum overhead. Instead of executing one single SQL command to update the 6 million items, you should also split it into smaller batches, for example by adding a WHERE clause on the date. That helps tremendously, and it’s not something the IndexAgent can do by itself, because it has no idea when things started to go south… So then, which kind of command should be executed? In this case, I wouldn’t recommend doing what the IndexAgent does. If you simply reset the status from acquired to awaiting, sure, the IndexAgent will be able to start, but it will still have 6+9 million items awaiting processing; you will therefore still have bad performance, and there is a pretty high probability that the number of acquired items will rise again… Therefore, the only reasonable choice is to export all distinct items from the dmi_queue_item table and then clean/remove all FT items. With some luck, you might have 5 or 10 duplicates for each document, so instead of indexing 15 million, it would just be 1 or 2 million (distinct).
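The export of distinct items mentioned above could look like this on Oracle; a sketch, assuming the standard dmi_queue_item_s columns (item_id holds the r_object_id of the document to index):

```sql
-- Oracle: distinct documents still queued for the fulltext user; export this
-- list first, it becomes the re-indexing input (ids.txt) after the cleanup
SELECT DISTINCT item_id
FROM dmi_queue_item_s
WHERE name = 'dm_fulltext_index_user'
  AND delete_flag = 0;
```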

An example SQL command to clean up all the items in a one-hour timeframe on Oracle would be (I would suggest making sure the IA isn’t running when messing with the table):

DELETE dmi_queue_item_s
WHERE name='dm_fulltext_index_user'
  AND delete_flag=0
  AND date_sent>=to_date('2020-06-28 22:00:00','YYYY-MM-DD HH24:MI:SS')
  AND date_sent<to_date('2020-06-28 23:00:00','YYYY-MM-DD HH24:MI:SS');
commit;

 

This cleanup can be done online without issue; just make sure you take an export of all distinct item_id values to re-index afterwards, otherwise you will have to execute the FT Integrity utility to find the documents missing from the index. With parallel execution on several DB sessions, the cleanup can actually be done rather quickly, and then it’s just background processing for the index, via the ids.txt for example.
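The date-based batching can also be scripted rather than typed by hand. A sketch using GNU date; the table and column names follow the Oracle statement above, and the start time and number of slices are just examples:

```shell
# gen_hourly_deletes "<start: YYYY-MM-DD HH:MM:SS>" <number of hours>
# Emits one DELETE + commit per one-hour slice, to keep each transaction small.
gen_hourly_deletes() {
  start=$1; hours=$2; i=0
  while [ "$i" -lt "$hours" ]; do
    # GNU date: relative "+ N hour" arithmetic on the window start
    from=$(date -u -d "$start + $i hour" '+%Y-%m-%d %H:%M:%S')
    to=$(date -u -d "$start + $((i + 1)) hour" '+%Y-%m-%d %H:%M:%S')
    printf "DELETE dmi_queue_item_s WHERE name='dm_fulltext_index_user' AND delete_flag=0 AND date_sent>=to_date('%s','YYYY-MM-DD HH24:MI:SS') AND date_sent<to_date('%s','YYYY-MM-DD HH24:MI:SS');\ncommit;\n" "$from" "$to"
    i=$((i + 1))
  done
}

# example: 24 hourly slices starting Sunday midnight
#   gen_hourly_deletes "2020-06-28 00:00:00" 24 > cleanup.sql
```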

 


Documentum – D2 doesn’t load repositories with “Unexpected error occurred”


I had a case today where all Documentum components were up and running, including D2, but when accessing its login page, the repositories wouldn’t appear and a message “An unexpected error occurred. Please refresh your browser” would pop up in the lower-right corner and disappear quickly. Refreshing the browser or opening a private window wouldn’t do anything. In such cases, of course, the first thing to do is to make sure the docbroker and repositories are responding, but if they are, what could the problem be? The root cause can be several things, I assume, since it’s a rather generic behavior, but I have seen this a few times already and it might not be really obvious at first glance, so sharing some thoughts about it might prove useful for someone.

Here is the login screen of D2 having the issue:

In my case, the repositories were apparently available on the Content Server and responding (connections through iapi/idql worked). The next step would probably be to check the D2 logs with DEBUG enabled, to capture as much as possible. This is what you would see in the logs when accessing the D2 login URL:

2020-11-29 11:12:36,434 UTC [DEBUG] ([ACTIVE] ExecuteThread: '37' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : D2 full build version: 16.5.1050 build 096
2020-11-29 11:12:36,435 UTC [DEBUG] ([ACTIVE] ExecuteThread: '37' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : patch version: 16.5.1050
2020-11-29 11:12:36,886 UTC [DEBUG] ([ACTIVE] ExecuteThread: '66' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.s.s.labels.X3ResourceBundleFactory      : getAllBundle for resources.i18n en
2020-11-29 11:12:36,924 UTC [DEBUG] ([ACTIVE] ExecuteThread: '99' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.p.s.s.settings.RpcSettingsServiceImpl   : Fetching Server properties
2020-11-29 11:12:36,940 UTC [DEBUG] ([ACTIVE] ExecuteThread: '21' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.p.s.s.settings.RpcSettingsServiceImpl   : Fetching Server shiro.ini
2020-11-29 11:12:36,942 UTC [DEBUG] ([ACTIVE] ExecuteThread: '55' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.p.s.s.settings.RpcSettingsServiceImpl   : Fetching Server adminMessage Settings
2020-11-29 11:12:36,978 UTC [DEBUG] ([ACTIVE] ExecuteThread: '84' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.s.s.labels.X3ResourceBundleFactory      : getAllBundle for resources.i18n en_US
2020-11-29 11:12:37,709 UTC [DEBUG] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.common.dctm.objects.DfDocbaseMapEx    : Load docbases from docbrocker 0.623s
2020-11-29 11:12:37,711 UTC [INFO ] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d2fs.dctm.web.services.D2fsRepositories   : Loaded repositories from docbroker: GR_Repo,Repo1
2020-11-29 11:12:37,712 UTC [INFO ] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d2fs.dctm.web.services.D2fsRepositories   : loginRepositoryFilter=GR_Repo
2020-11-29 11:12:37,713 UTC [INFO ] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d2fs.dctm.web.services.D2fsRepositories   : Filtering out repository GR_Repo
2020-11-29 11:12:37,713 UTC [DEBUG] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2.api.config.D2OptionsCache          : D2Info element not for in cache
2020-11-29 11:12:37,713 UTC [ERROR] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2.api.config.D2OptionsCache          : Trying to fetch D2Info before it's been set
2020-11-29 11:12:37,815 UTC [DEBUG] ([ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2.api.D2Session                      : D2Session::initTBOEx after tbos from map 0.000s
2020-11-29 11:12:37,815 UTC [DEBUG] ([ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2.api.D2Session                      : D2Session::initTBOEx after tbos C6-dbor bundle 0.001s
2020-11-29 11:12:38,808 UTC [INFO ] ([ACTIVE] ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.x3.portal.server.X3HttpSessionListener  : Created http session 3tGeYTFa9ChEQJP-V7GdMyQreCk3t7_BFfS3EixfHtTbO6qFtOg3!781893690!1606648358808
2020-11-29 11:12:38,809 UTC [DEBUG] ([ACTIVE] ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : XSRF_TOKEN not found in session
2020-11-29 11:12:38,811 UTC [DEBUG] ([ACTIVE] ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : D2 full build version: 16.5.1050 build 096
2020-11-29 11:12:38,811 UTC [DEBUG] ([ACTIVE] ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : patch version: 16.5.1050

At first glance, the log content doesn’t look strange: there are no obvious warnings or errors pointing at the issue. As you can see, the list of repositories is present and filtered properly, so the drop-down should display something, but it doesn’t. The only hint is the single error and its associated debug message just before it, about the D2OptionsCache: the D2Info element isn’t in the cache while D2 is trying to use it. In this case, the only way to clearly see the actual issue is to restart the D2 Application Server to force the LoadOnStartup to be re-executed. Maybe this only holds if LoadOnStartup is enabled; I didn’t test without it, but it might be worth checking whether D2 can refresh the cache at runtime in that case. After a restart of the Application Server, the problem becomes clear:

2020-11-29 11:18:28,421 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Initialized Bean : d2fs
2020-11-29 11:18:28,426 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Initialized Bean : subscriptionsService
2020-11-29 11:18:28,427 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Service Bean is set to Remote
2020-11-29 11:18:28,431 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Initialized Bean : exceptionResolver
2020-11-29 11:18:28,433 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Initialized Bean : soapProvider
2020-11-29 11:18:28,503 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.servlets.init.LoadOnStartup   : DFC version : 16.4.0200.0080
2020-11-29 11:18:28,543 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - START =====================================
2020-11-29 11:18:28,544 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - HTTP Headers
Remote : null (null)
Locale : null
Request Protocol : null
Request Method : null
Context Path : /D2
Request URI : null
Request encoding : null
Request Parameters :
Request Headers :
2020-11-29 11:18:30,799 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - Plugins (0.001s)
2020-11-29 11:18:30,803 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - Start plugin before : D2-Widget v16.5.1050 build 096
2020-11-29 11:18:30,804 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - End plugin before : D2-Widget v16.5.1050 build 096 0.000s
2020-11-29 11:18:30,806 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - Standard Servlet :
2020-11-29 11:18:30,808 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.servlets.init.LoadOnStartup   : Cache BOCS URL disabled.
2020-11-29 11:18:40,865 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.servlets.init.LoadOnStartup   : Free memory=3.1386707 GB, Total memory=4.0 GB
2020-11-29 11:18:50,217 UTC [ERROR] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - DfException:: THREAD: [ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'; MSG: [DM_STORAGE_E_NOT_ACCESSIBLE]error:  "Storage area filestore_01 is not currently accessible.  Reason:  errno: 2, message: No such file or directory."; ERRORCODE: 100; NEXT: null
2020-11-29 11:18:50,220 UTC [ERROR] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : {}
com.documentum.fc.common.DfException: [DM_STORAGE_E_NOT_ACCESSIBLE]error:  "Storage area filestore_01 is not currently accessible.  Reason:  errno: 2, message: No such file or directory."
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getException(DocbaseMessageManager.java:137)
        at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.checkForMessages(NetwiseDocbaseRpcClient.java:329)
        at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.applyForInt(NetwiseDocbaseRpcClient.java:600)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection$6.evaluate(DocbaseConnection.java:1382)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.evaluateRpc(DocbaseConnection.java:1180)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.applyForInt(DocbaseConnection.java:1375)
        at com.documentum.fc.client.impl.docbase.DocbaseApi.makePuller(DocbaseApi.java:630)
        at com.documentum.fc.client.impl.connection.docbase.RawPuller.<init>(RawPuller.java:22)
        at com.documentum.fc.client.impl.session.Session.makePuller(Session.java:3796)
        at com.documentum.fc.client.impl.session.SessionHandle.makePuller(SessionHandle.java:2468)
        at com.documentum.fc.client.content.impl.BlockPuller.<init>(BlockPuller.java:27)
        at com.documentum.fc.client.content.impl.PusherPullerContentAccessor.buildStreamFromContext(PusherPullerContentAccessor.java:40)
        at com.documentum.fc.client.content.impl.PusherPullerContentAccessor.getStream(PusherPullerContentAccessor.java:28)
        at com.documentum.fc.client.content.impl.ContentAccessorFactory.getStream(ContentAccessorFactory.java:37)
        at com.documentum.fc.client.content.impl.Store.getStream(Store.java:64)
        at com.documentum.fc.client.content.impl.FileStore___PROXY.getStream(FileStore___PROXY.java)
        at com.documentum.fc.client.content.impl.Content.getStream(Content.java:185)
        at com.documentum.fc.client.content.impl.Content___PROXY.getStream(Content___PROXY.java)
        at com.documentum.fc.client.content.impl.ContentManager.getStream(ContentManager.java:84)
        at com.documentum.fc.client.content.impl.ContentManager.namelessGetFile(ContentManager.java:252)
        at com.documentum.fc.client.content.impl.ContentManager.getFile(ContentManager.java:198)
        at com.documentum.fc.client.content.impl.ContentManager.getFile(ContentManager.java:173)
        at com.documentum.fc.client.DfSysObject.getFileEx2(DfSysObject.java:1978)
        at com.documentum.fc.client.DfSysObject.getFileEx(DfSysObject.java:1970)
        at com.documentum.fc.client.DfSysObject.getFile(DfSysObject.java:1965)
        at com.emc.d2.api.config.modules.property.D2PropertyConfig___PROXY.getFile(D2PropertyConfig___PROXY.java)
        at com.emc.common.java.xml.XmlCacheValue.<init>(XmlCacheValue.java:63)
        at com.emc.common.java.xml.XmlCacheImpl.getXmlDocument(XmlCacheImpl.java:154)
        at com.emc.common.java.xml.XmlCacheImpl.getXmlDocument(XmlCacheImpl.java:182)
        at com.emc.d2fs.dctm.servlets.init.LoadOnStartup.loadXmlCache(LoadOnStartup.java:501)
        at com.emc.d2fs.dctm.servlets.init.LoadOnStartup.refreshCache(LoadOnStartup.java:424)
        at com.emc.d2fs.dctm.servlets.init.LoadOnStartup.processRequest(LoadOnStartup.java:208)
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.execute(D2HttpServlet.java:244)
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGetAndPost(D2HttpServlet.java:510)
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGet(D2HttpServlet.java:113)
        at com.emc.d2fs.dctm.servlets.init.LoadOnStartup.init(LoadOnStartup.java:136)
        at javax.servlet.GenericServlet.init(GenericServlet.java:244)
		...
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
2020-11-29 11:18:50,230 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d.dctm.web.services.D2fsSessionManager    : Using non-sso shiro SSO filter with non-sso.enableDFCPrincipalMode=false
2020-11-29 11:18:50,231 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d.dctm.web.services.D2fsSessionManager    : Not using DFC Principal Support
2020-11-29 11:18:50,232 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - Free memory=2.5813167 GB. Total memory=4.0 GB.
2020-11-29 11:18:50,232 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - END (21.726s) =====================================

2020-11-29 11:18:50,235 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.servlet.init.LogMemory   : D2SecurityConfiguration : Start
2020-11-29 11:18:50,235 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.servlet.init.LogMemory   : ServletContext: D2
2020-11-29 11:18:50,269 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.servlet.init.LogMemory   : D2SecurityConfiguration : End
2020-11-29 11:18:50,270 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.servlet.init.LogMemory   : Free memory=2.5780156 GB, Total memory=4.0 GB

As you can see above, the issue is actually linked to the Data of the repositories not being available, and the error only shows up during the LoadOnStartup execution, never afterwards. Here, the NAS was unreachable at that time, so D2 was impacted and nobody could log in. From my point of view, it’s a pity that D2 behaves this way… Even if the Data/Documents aren’t reachable, in a perfect world this shouldn’t prevent you from logging into the system and using it, except of course for actions involving document content. Browsing the repository, checking properties and similar operations should work without issue, but they don’t, because of how Documentum is designed and how it works.

Because the LoadOnStartup actions are only executed at startup (if enabled), once the Data of the repositories are back you will need to restart D2 again, otherwise the issue remains. Therefore, if you face this issue and the Data is currently available, it might be worth checking whether it was available when D2 started. In addition, a restart of D2 never really hurts…
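To avoid restarting D2 for nothing, the storage root mentioned in the error can be checked first from the Content Server. A minimal sketch; the path is an assumption and would really come from dm_location.file_system_path of the filestore in question:

```shell
# Hedged sketch: verify the filestore root from the DM_STORAGE_E_NOT_ACCESSIBLE
# error is reachable again before restarting D2. The path below is an example;
# on a real Content Server, fetch it via idql from dm_location.file_system_path.
check_storage() {
  if [ -d "$1" ]; then
    echo "storage reachable - a D2 restart should fix the login"
  else
    echo "storage still unreachable - restarting D2 will not help yet"
  fi
}
check_storage /data/dctm/Repo1/content_storage_01
```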

If you encountered this behavior of D2 with another root cause, feel free to share!

Cet article Documentum – D2 doesn’t load repositories with “Unexpected error occurred” est apparu en premier sur Blog dbi services.

Documentum – IDS on Windows Server not able to start with error 31: device not functioning


Documentum Interactive Delivery Services, or IDS, is a Documentum product that can be useful to publish some documents to an external web server or something similar. It usually works rather well, even if there haven’t been many changes in the product in years, maybe because it does what it is supposed to do… As a big fan of Linux systems, I pretty much never work on Windows Servers, but when I do, somehow, there is always trouble! Maybe I’m cursed or maybe the OS is really not for me…

The last time I worked on a Windows Server, I had to install IDS 7.3 on a bunch of servers (POC, DEV, QA, PRD). The POC installation went smoothly and everything worked as expected, but then trouble started with the other three, where the IDS Service couldn’t start at all and failed with an “error 31: device not functioning”.

As is often the case, this is a rather generic message. Looking at the Event Viewer therefore gave some more information:

The text extract of this event is:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="DCTM WebCache Server" />
    <EventID Qualifiers="49152">1018</EventID>
    <Level>2</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2020-12-16T12:23:17.000000000Z" />
    <EventRecordID>3105</EventRecordID>
    <Channel>Application</Channel>
    <Computer>hostname.domain.com</Computer>
    <Security />
  </System>
  <EventData>
    <Data>Load JVM DLL Failed on LoadLibrary (%s)</Data>
    <Data>D:\ids\target\product\jre\win\bin\server\jvm.dll</Data>
  </EventData>
</Event>

It is rather clear that the issue is related to Java, but everything looked good at first sight. Comparing the working server with a non-working one, both had the same setup and the same environment. Listing all environment variables on the two servers showed the same output, except for a customer-specific value that apparently identified the base image used to install the Windows Server (2012R2 02.02 vs 2012R2 02.03). Digging further, even though the JAVA_HOME variable wasn’t set on any of the servers, I still tried adding it to see the behavior:

  • Click on the Start button
  • Write “Edit the system environment variables” on the search and click on it
  • Click on the Environment Variables button
  • Create the system (bottom of screen) variable JAVA_HOME with: D:\ids\target\product\jre\win (or whatever path you have installed your IDS to)
  • Update the system (bottom of screen) variable PATH, prepend it with: %JAVA_HOME%\bin;

After doing that, the IDS Service was actually able to start… I do not have the complete explanation, but this issue must have been caused by the different OS build. Even though both are 2012R2 (the latest version supported by IDS 7.3), there must be some differences in the customer-specific build (automated OS installation) that trigger the issue whenever JAVA_HOME isn’t set in the environment. This is normally not needed by IDS since Java is bundled with the product, so all commands and libraries already point to the expected path. Nevertheless, if you are facing the same issue, it might be worth giving it a try!
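For repeatability, the same GUI steps can also be scripted from an elevated command prompt. This is only a sketch — the path comes from this installation, and since setx /M rewrites the machine-wide variable, review the current PATH value before changing it:

```batch
:: Hedged sketch (elevated cmd) - equivalent of the GUI steps above
setx JAVA_HOME "D:\ids\target\product\jre\win" /M

:: Display the current machine PATH, then prepend %JAVA_HOME%\bin; to it:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v Path
:: setx PATH "D:\ids\target\product\jre\win\bin;<value shown above>" /M
```

As with the manual steps, the IDS Service only picks up the new environment after it is restarted.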

Cet article Documentum – IDS on Windows Server not able to start with error 31: device not functioning est apparu en premier sur Blog dbi services.

Documentum – RCS/CFS Upgrade in silent fails with IndexOutOfBoundsException


Several years ago, I wrote a series of blogs regarding the silent installation of Documentum components, including for a RCS/CFS (the HA part of a Repository). In there, I described the process and gave an example properties file with all the required parameters and a quick explanation for each of them. As described in the previous blogs, and as is true for most Documentum components, if you want to upgrade instead of installing from scratch, you more or less just have to change the “CREATE” action to “UPGRADE“. There is, however, a small specificity for the Remote Content Server, and that is the point of this blog.

Trying to upgrade a RCS/CFS by reusing the install silent properties file with the UPGRADE action will give something like this (with DEBUG logs enabled):

[dmadmin@cs-2 ~]$ cat $DM_HOME/install/logs/install.log
12:41:07,100 DEBUG [main]  - ###################The variable is: LOG_IS_READY, value is: true
12:41:07,100 DEBUG [main]  - ###################The variable is: FORMATED_PRODUCT_VERSION_NUMBER, value is: 20.2.0000.0110
12:41:07,101  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: CfsConfigurator
12:41:07,101  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 20.2.0000.0110
12:41:07,101  INFO [main]  -
...
12:41:07,224 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to resolve variable
12:41:07,225 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to check condition
12:41:07,225 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to setup
12:41:07,225 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - *******************Start action com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName***********************
12:41:07,230 ERROR [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Index -1 out of bounds for length 3
java.lang.IndexOutOfBoundsException: Index -1 out of bounds for length 3
        at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
        at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
        at java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
        at java.base/java.util.Objects.checkIndex(Objects.java:372)
        at java.base/java.util.ArrayList.get(ArrayList.java:459)
        at com.documentum.install.multinode.cfs.common.services.DiServerContentServers.getServer(DiServerContentServers.java:192)
        at com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName.setup(DiWAServerCfsTestServerConfigObjectName.java:23)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:73)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.an(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.al(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runPreInstall(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
12:41:07,233  INFO [main]  - The INSTALLER_UI value is SILENT
12:41:07,233  INFO [main]  - The KEEP_TEMP_FILE value is true
...
[dmadmin@cs-2 ~]$

This error shows that the installer is failing while trying to get some details of the repository to upgrade. The exception stack isn’t very clear about what exactly it fails to retrieve: docbase name, dm_server_config name, hostname, service name or something else. Since I don’t have access to the source code, I worked with OpenText on SR#4593447 to get insight into what is missing. It turns out it is actually the Service Name that cannot be found in the properties file. When a RCS/CFS is installed, the installer uses the property “SERVER.DOCBASE_SERVICE_NAME“, described in the previous blog about silent installation; it is the only parameter required for the Service Name. For an upgrade, you might expect the installer to be smart enough to fetch the value from the server.ini directly or, at least, to reuse the same parameter as during installation, but that’s not the case. It only relies on the properties file and uses another parameter that is only required for upgrade/delete: “SERVER.COMPONENT_NAME“.

Therefore, if you want to upgrade a RCS/CFS, you will need to provide the Service Name in the “SERVER.COMPONENT_NAME” parameter (the same value as “SERVER.DOCBASE_SERVICE_NAME“). It’s not a problem to have it in both the install and upgrade properties files: you can put as many parameters as you want in these, and if Documentum doesn’t recognize one, it will just ignore it. The OpenText Engineers weren’t able to find the reason why there are two different parameters for the same purpose, but apparently that comes from way back…
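In practice, the upgrade properties file simply ends up carrying both parameters with the same value, along these lines (“Repo1” being an example service name):

```
SERVER.DOCBASE_SERVICE_NAME=Repo1
# Same value again, under the parameter the UPGRADE path actually reads:
SERVER.COMPONENT_NAME=Repo1
```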

Anyway, once you add the parameter with its value and start the upgrade of the RCS/CFS again, it should work properly:

[dmadmin@cs-2 ~]$ cat $DM_HOME/install/logs/install.log
13:27:16,953 DEBUG [main]  - ###################The variable is: LOG_IS_READY, value is: true
13:27:16,953 DEBUG [main]  - ###################The variable is: FORMATED_PRODUCT_VERSION_NUMBER, value is: 20.2.0000.0110
13:27:16,954  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: CfsConfigurator
13:27:16,954  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 20.2.0000.0110
13:27:16,954  INFO [main]  -
...
13:27:17,091 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to resolve variable
13:27:17,091 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to check condition
13:27:17,091 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to setup
13:27:17,091 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - *******************Start action com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName***********************
13:27:17,096 DEBUG [main]  - ###################The variable is: SERVER.SERVER_INI_FILE_NAME, value is: server_cs-2_Repo1.ini
13:27:17,097 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - *******************************end of action********************************
13:27:17,100 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - Start to resolve variable
13:27:17,100 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - Start to check condition
13:27:17,100 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - Start to setup
13:27:17,100 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - *******************Start action com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo***********************
13:27:17,100 DEBUG [main]  - ###################The variable is: SERVER.DOCBASE_HOME, value is: $DOCUMENTUM/dba/config/Repo1
13:27:17,101 DEBUG [main]  - ###################The variable is: common.old.aek.key.name, value is: aek.key
13:27:17,101 DEBUG [main]  - ###################The variable is: common.aek.key.name, value is: aek.key
13:27:17,101 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - The aek passphrase is ***************
13:27:17,101 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - *******************************end of action********************************
...
[dmadmin@cs-2 ~]$

It’s not always easy to work with silent installations of Documentum because the documentation for that part is currently quite poor; this parameter, for example, isn’t documented anywhere. In fact, none of the parameters are documented for the CS part, but at least there is usually a kind of “template” under $DM_HOME/install/silent/templates. Unfortunately, this parameter doesn’t appear there either. So, yes, it might be a little difficult, but once you have it working you can gain a lot, so it’s still worth the sweat.

Cet article Documentum – RCS/CFS Upgrade in silent fails with IndexOutOfBoundsException est apparu en premier sur Blog dbi services.

Documentum – Configuration of an IDS Target Memory/RAM usage on Windows


A few months ago, I had to work on a Windows Server to set up an IDS Target. The installation and configuration of the target aren’t that different from a Linux host, so it wasn’t difficult at all (if you ignore some strange behavior like the one described here, for example). But there was one point about which I was a little skeptical: how do you configure the Memory/RAM assigned to the IDS Target JVM? On Linux, it’s very easy since the IDS Target configuration creates start/stop scripts in which you can easily find the Java commands executed. Changing the JVM memory is therefore just a matter of adding the usual Xms/Xmx parameters there…

Unfortunately, on Windows, IDS automatically sets up a service and this service uses a .exe file, which you therefore cannot modify in any way. OpenText (or rather EMC before them) could have used a cmd or ps1 script to call the Java command, similarly to Linux, or even a java.ini file somewhere, but that’s not the case.

By default, the JVM will probably use something like 256MB of RAM. The exact value depends on the Java version and potentially on your server as well (how much RAM the host has). There are already a lot of blogs and posts on how to check how much memory the JVM uses by default, but for quick reference, you can check it with something like:

# Linux:
java -XX:+PrintFlagsFinal -version | grep HeapSize

# Windows:
java -XX:+PrintFlagsFinal -version | findstr HeapSize

 

Having 256 MB of RAM for the IDS Target might be sufficient if the number of files to transfer is rather “small”. However, at some point, you might end up facing an OutOfMemory error, most probably when the IDS Target tries to open the properties.xml file from the previous full-sync, or directly during the initial full-sync. If the file is too big (bigger than the memory of the JVM), it will probably end in an OOM and your synchronization will fail.

 

Therefore, how do you increase the default IDS Target JVM settings on Windows? It’s actually not that complicated but you will need to update the registry directly:

  • Open regedit on the target Windows Server
  • Navigate to (that’s an example with secure IDS on port 2787, your path might be different):
    • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OpenText Documentum IDS Target_secure_2787\Env
  • Double click on the registry key inside this folder named “Values”
  • Update the “jvmoptions” definition (around the end normally) to add the Xms and Xmx parameters like:
    • from: “jvmoptions=-Dfile.encoding=UTF-8”
    • to: “jvmoptions=-Dfile.encoding=UTF-8 -Xms2g -Xmx4g”
  • Restart the IDS Target Service
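If you have to do this on several targets, the manual edit boils down to appending two flags to one entry of a multi-line registry value, which is easy to script. Below is a minimal, hedged sketch of just the string transformation involved (on a real server you would read and write the “Values” data, a REG_MULTI_SZ, with Python’s winreg module or PowerShell — that part is only hinted at in the comments, and the heap values are the same examples as above):

```python
def add_heap_flags(env_values, xms="-Xms2g", xmx="-Xmx4g"):
    """Given the lines of the 'Values' multi-string registry data,
    append Xms/Xmx to the 'jvmoptions=' entry if no -Xmx is set yet,
    leaving every other entry untouched."""
    updated = []
    for entry in env_values:
        if entry.startswith("jvmoptions=") and "-Xmx" not in entry:
            entry = "%s %s %s" % (entry, xms, xmx)
        updated.append(entry)
    return updated

# the 'Values' data roughly as shown above (other entries omitted);
# on Windows you would fetch it with winreg.QueryValueEx(...) instead:
env = ["jvmoptions=-Dfile.encoding=UTF-8"]
print(add_heap_flags(env))
# → ['jvmoptions=-Dfile.encoding=UTF-8 -Xms2g -Xmx4g']
```

The check on “-Xmx” makes the script idempotent, so running it twice does not duplicate the flags.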

 

 

With that, the IDS Target is now allowed to use up to 4 GB of RAM, which should hopefully give it enough headroom to complete the synchronization without OutOfMemory errors.

 

Cet article Documentum – Configuration of an IDS Target Memory/RAM usage on Windows est apparu en premier sur Blog dbi services.


Documentum – SSL Certificate based secure communication setup


Around four years ago, I did a few presentations, here, in Switzerland about “Security & Documentum”. In there, I talked about a lot of different subjects related to both security and Documentum (you guessed it…) like: ciphers, SHA, FIPS & JCE, Documentum & DFC connect mode (characteristics, ciphers, protocols, encryptions, limitations), Documentum & DFC encryptions in transit and at rest (AEK/DBK/LTK/FSK/FEK, TCS, CS Lockbox, D2 Lockbox vs D2 Keystore, Passwords encryption and decryption), and some other topics (HTTPS on WebLogic, JBoss/WildFly, Tomcat, best practices for security, LDAPS support, FIPS 140-2 support and compliance).

 

“Why the hell are you talking about presentations you gave 4 years ago?”. Good question, thank you! This presentation was really dense, so all I could do was show an example of the configuration files needed for a real SSL Certificate based secure communication, but not how exactly to reach this point. I talked about this configuration in several blogs already but never took the time to explain/show it from A to Z. So, that’s what I’m going to do here, because without being able to create the SSL Certificate and trust stores, you will probably have some trouble really configuring Documentum to use the real-secure mode (as opposed to the default-secure mode, which uses anonymous ciphers and is therefore not fully secure).

 

In this blog, I will use a self-signed SSL Certificate only. It is possible to use a CA-signed SSL Certificate; the only difference is that you would need to put the trust chain into the different trust stores instead of the self-signed SSL Certificate. This has pros and cons however… It is easier to automate because a CA trust chain is a public SSL Certificate and therefore, if you are in a CI/CD infrastructure, you can easily create the needed Documentum trust stores from anywhere (any pods, any containers, any VMs, and so on). However, that also means that anybody with access to this trust chain can potentially create the files needed by a DFC Client to talk to your Docbroker and Repositories. That might or might not be a problem for you, so I will let you decide on that. On the other hand, using a self-signed SSL Certificate makes it more difficult to gain access to the certificates (unless you are storing them in a public and open location of course) but, at the same time, it complicates the setup a little bit for remote DFC Clients, since you will need to share, somehow, the Docbroker and Repositories certificates in order to create a trust store for the DFC Clients.

 

I split the steps into different sections: one global definition of parameters & passwords and then one section per component. Please note that for the DFC Client section, I used the JMS. The same steps can be applied to any DFC Client, you will just need access to the needed input files. Please make sure that all components are shut down when you start the configuration: it will be much easier to spot real errors if the logs aren’t flooded with hundreds of expected ones from clients still trying to use the non-secure (or default-secure) modes. Alright, enough blabbering, let’s start with the setup.

 

I. Global setup/parameters

All the files needed for the Docbroker and Repositories setup need to be put into the $DOCUMENTUM/dba/secure/ folder, so all the commands will be executed there directly. I defined here some environment variables that will be used by all the commands. The read commands will simply ask you to enter the needed password and press enter. Doing so stores the password into the environment variable (lb_pp, b_pwd, s_pwd and d_pwd). If you aren’t using any Lockbox (it is deprecated since Documentum 16.7), just ignore the Lockbox part.

cd $DOCUMENTUM/dba/secure/
lb_name="lockbox.lb"
aek_name="CSaek"
b_name="docbroker"
s_name="contentserver"
d_name="dfc"

read -s -p "  ----> Please enter the ${lb_name} passphrase: " lb_pp

read -s -p "  ----> Please enter the ${b_name} related password: " b_pwd

read -s -p "  ----> Please enter the ${s_name} related password: " s_pwd

read -s -p "  ----> Please enter the ${d_name} related password: " d_pwd

echo "
Lockbox passphrase entered: ${lb_pp}
Broker password entered: ${b_pwd}
Server password entered: ${s_pwd}
DFC password entered: ${d_pwd}"

 

II. Docbroker setup – SSL Server only

In this section, we will create the certificate for the Docbroker, create the needed keystore (it needs to be PKCS12) and encrypt the keystore password. If you aren’t using any Lockbox, just remove the two Lockbox-related parameters (and their associated values/passwords) from the “dm_encrypt_password” command and remove the “crypto_lockbox” line from the Docbroker.ini file (or whatever the name of your file is).

openssl req -x509 -days 1096 -newkey rsa:2048 -keyout ${b_name}.key -out ${b_name}.crt -subj "/C=CH/ST=Jura/L=Delemont/O=dbi services/OU=IT/CN=${b_name}" -passout pass:"${b_pwd}"

openssl pkcs12 -export -out ${b_name}.p12 -inkey ${b_name}.key -in ${b_name}.crt -name ${b_name} -descert -passin pass:"${b_pwd}" -passout pass:"${b_pwd}"

dm_encrypt_password -lockbox "${lb_name}" -lockboxpassphrase "${lb_pp}" -keyname "${aek_name}" -encrypt "${b_pwd}" -file ${b_name}.pwd

cp $DOCUMENTUM/dba/Docbroker.ini $DOCUMENTUM/dba/Docbroker.ini.orig

echo "[DOCBROKER_CONFIGURATION]
secure_connect_mode=secure
crypto_keyname=${aek_name}
crypto_lockbox=${lb_name}
keystore_file=${b_name}.p12
keystore_pwd_file=${b_name}.pwd" > $DOCUMENTUM/dba/Docbroker.ini

 

At this point, you can start the Docbroker and it should listen only on the secure port, without errors. If there are still clients up & running, you will probably face a lot of handshake failure errors… It is possible to define the list of ciphers to use in the Docbroker.ini file (cipherlist=xxx:yyy:zzz) but if you do so, please make sure that all the SSL Clients (Repositories and DFC Clients alike) that will talk to it support this cipher as well.

 

III. Repository setup – SSL Server and SSL Client

In this section, we will create the certificate for the Repositories (each repo can have its own if you prefer), create the needed keystore (it needs to be PKCS12), create the needed trust store (it needs to be PKCS7) and encrypt the keystore password. If you aren’t using any Lockbox, just remove the two Lockbox-related parameters (and their associated values/passwords) from the “dm_encrypt_password” command. In case you have several Lockboxes and AEK Keys, you might want to retrieve their names from the server.ini directly (inside the loop) and then use these to encrypt the password for each Repository independently.

openssl req -x509 -days 1096 -newkey rsa:2048 -keyout ${s_name}.key -out ${s_name}.crt -subj "/C=CH/ST=Jura/L=Jura/O=dbi services/OU=IT/CN=${s_name}" -passout pass:"${s_pwd}"

openssl pkcs12 -export -out ${s_name}.p12 -inkey ${s_name}.key -in ${s_name}.crt -name ${s_name} -descert -passin pass:"${s_pwd}" -passout pass:"${s_pwd}"

dm_encrypt_password -lockbox "${lb_name}" -lockboxpassphrase "${lb_pp}" -keyname "${aek_name}" -encrypt "${s_pwd}" -file ${s_name}.pwd

openssl crl2pkcs7 -nocrl -certfile ${b_name}.crt -outform der -out ${s_name}-trust.p7b

for s_ini in $(ls $DOCUMENTUM/dba/config/*/server.ini); do
  cp ${s_ini} ${s_ini}.orig
  sed -i --follow-symlinks "/keystore_file/d" ${s_ini}
  sed -i --follow-symlinks "/keystore_pwd_file/d" ${s_ini}
  sed -i --follow-symlinks "/truststore_file/d" ${s_ini}
  sed -i --follow-symlinks "/cipherlist/d" ${s_ini}
  sed -i --follow-symlinks "/^crypto_keyname/a \truststore_file = ${s_name}-trust.p7b" ${s_ini}
  sed -i --follow-symlinks "/^crypto_keyname/a \keystore_pwd_file = ${s_name}.pwd" ${s_ini}
  sed -i --follow-symlinks "/^crypto_keyname/a \keystore_file = ${s_name}.p12" ${s_ini}
done
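Before running these commands against a real environment, you can validate the openssl keystore/trust-store pattern standalone. The sketch below uses throwaway names and a dummy password (demo, demo123 — all made up for illustration) and leaves out the Documentum-specific -descert flag; it simply proves that the generated PKCS12 opens with its password and that the PKCS7 bundle lists the expected certificate:

```shell
# throwaway self-signed certificate + encrypted private key, as for the Docbroker/Repositories:
demo_pwd="demo123"
openssl req -x509 -days 1 -newkey rsa:2048 -keyout demo.key -out demo.crt \
    -subj "/CN=demo" -passout pass:"${demo_pwd}" 2>/dev/null

# PKCS12 keystore (what keystore_file points to):
openssl pkcs12 -export -out demo.p12 -inkey demo.key -in demo.crt \
    -name demo -passin pass:"${demo_pwd}" -passout pass:"${demo_pwd}"

# the keystore must open cleanly with the right password:
openssl pkcs12 -in demo.p12 -noout -passin pass:"${demo_pwd}" && echo "keystore OK"

# PKCS7 trust store (what truststore_file points to) and the certificate it contains:
openssl crl2pkcs7 -nocrl -certfile demo.crt -outform der -out demo-trust.p7b
openssl pkcs7 -inform der -in demo-trust.p7b -print_certs -noout && echo "trust store OK"
```

A wrong password on the `openssl pkcs12 -in … -noout` check makes it exit non-zero, which is also a quick way to confirm which password a .p12 was created with.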

 

At this point, you can start the different Repositories and they should start and project themselves to the Docbroker. However, the AgentExec will probably still fail to start properly because it uses the global dfc.properties of the Documentum Server, which hasn’t been updated yet. So you might want to configure the global dfc.properties before starting the Repositories. It is possible to define the list of ciphers to use in the server.ini file (cipherlist=xxx:yyy:zzz) but if you do so, please make sure that all the SSL Clients (DFC Clients) that will talk to it and the SSL Servers (Docbroker) it talks to support this cipher as well.

 

IV. DFC Clients setup (JMS, IndexAgent, DA, D2, …) – SSL Client only

In this section, we will create the needed trust store (it needs to be JKS) and encrypt the trust store password. Regarding the password encryption, the command will work on any DFC Client; you will just need to add the dfc.jar to the classpath (for example on xPlore: -cp “$XPLORE_HOME/dfc/dfc.jar”) if you aren’t executing it on a Documentum Server.

openssl x509 -outform der -in ${b_name}.crt -out ${b_name}.der

openssl x509 -outform der -in ${s_name}.crt -out ${s_name}.der

$JAVA_HOME/bin/keytool -importcert -keystore ${d_name}-trust.jks -file ${b_name}.der -alias ${b_name} -noprompt -storepass ${d_pwd}

$JAVA_HOME/bin/keytool -importcert -keystore ${d_name}-trust.jks -file ${s_name}.der -alias ${s_name} -noprompt -storepass ${d_pwd}

d_pwd_enc=$($JAVA_HOME/bin/java com.documentum.fc.tools.RegistryPasswordUtils ${d_pwd})

cp $DOCUMENTUM/config/dfc.properties $DOCUMENTUM/config/dfc.properties.orig
sed -i '/dfc.session.secure_connect_default/d' $DOCUMENTUM/config/dfc.properties
sed -i '/dfc.security.ssl.use_existing_truststore/d' $DOCUMENTUM/config/dfc.properties
sed -i '/dfc.security.ssl.truststore/d' $DOCUMENTUM/config/dfc.properties
sed -i '/dfc.security.ssl.truststore_password/d' $DOCUMENTUM/config/dfc.properties

echo "dfc.session.secure_connect_default=secure
dfc.security.ssl.use_existing_truststore=false
dfc.security.ssl.truststore=$DOCUMENTUM/dba/secure/${d_name}-trust.jks
dfc.security.ssl.truststore_password=${d_pwd_enc}" >> $DOCUMENTUM/config/dfc.properties

 

This is technically the global dfc.properties of a Documentum Server and not really the JMS one but I assume almost everybody in the world is just including this one (using #include) for the dfc.properties of the JMS (ServerApps, acs, bpm, …), to avoid duplication of generic parameters/configurations at multiple locations and just manage them globally.
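As a hedged illustration of that include pattern (the path below is an example, adapt it to your installation), such a JMS-side dfc.properties then typically contains little more than the include itself, with any JMS-specific keys added after it:

```properties
#include /opt/documentum/config/dfc.properties
```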

 

At this point, you can start the DFC Client and it should be able to communicate with the Docbroker and the Repositories. As said before, if you already started the Repositories, you might want to make sure that the AgentExec is running and, if not, quickly restart the Repositories.

 

Some final remarks on the SSL Certificate based secure configuration of Documentum:

  • Other Content Servers & Docbrokers (HA part) must re-use the exact same keystores (and therefore trust store as well in the end). Files must be sent to all other hosts and re-used exactly in the same way
  • Other DFC clients can use newly created files but in the end, it will contain the exact same content (either the self-signed Docbroker and Repositories certificates or the CA-signed trust chain)… Therefore, files can be sent to all DFC clients and re-used exactly in the same way as well
  • After the initial generation, you don’t need any of the key, crt or der files anymore so you can remove them for security reasons:
    • rm ${b_name}.key ${b_name}.crt ${b_name}.der ${s_name}.key ${s_name}.crt ${s_name}.der
  • I didn’t describe everything in full length here; there are a bunch of other things and limitations to know before going in that direction, so you will probably want to read the documentation carefully

 

Cet article Documentum – SSL Certificate based secure communication setup est apparu en premier sur Blog dbi services.

A Simple Repository Browser Utility


A few weeks ago, as the final steps of a cloning procedure, I wanted to check if the cloned repository was OK. One of the tests was to peek and poke around in the repository and try to access its content. This is typically the kind of task for which you’d use a GUI-based program because it is much quicker and easier this way rather than by sending manually typed commands to the server from within idql and iapi and transferring the contents to a desktop where a pdf reader, word processor and spreadsheet programs can be used to visualize them. Documentum Administrator (alias DA) is the tool we generally use for this purpose. It is a browser-based java application deployed on a web application server such as Oracle WebLogic (which is overkill just for DA) or tomcat. It also requires IE as the browser because DA needs to download an executable extension for Windows in order to enable certain functionalities. So, I had to download and install the full requirements’ stack to enable DA: an openjdk (several trials before the correct one, an OpenJDK v11, was found), tomcat, DA (twice, one was apparently crippled), configure and deploy DA (with a lot of confusing date errors which could relate to the cloning process but were not, after all), start my Windows VM (all 8 Gb of RAM of it), start IE (which I never use, and you shouldn’t either), point IE to the AWS instance DA was installed in, download and install the extension when prompted to do so, all this only to notice that 1. content visualization still did not work and 2. its installation did not stick, as it kept asking to download and install the extension over and over again. All this DA part took twice as long as the cloning process itself. All I wanted was to browse the repository, click on a few random files here and there to see if their content was reachable, and to do that I had to install several Gb of, dare I say ?, bloatware. “This is ridiculous”, I thought, there has to be a better way. And indeed there is.
I remembered a cute little python module I use sometimes, server.py. It embeds a web server and presents a navigable web interface to the file system directory it is started from. From there, one can click on a file link and the file is opened in the browser, or by the right application if it is installed and the mime file association is correct; or click on a sub-directory link to enter it. Colleagues can also use the URL to come and fetch files from my machine if needed, a quick way to share files, albeit temporarily.
 
Starting the file server in the current directory:

 
Current directory’s listing:

As it is open source, its code is available here server.py.
The file operations per se, mainly calls to the os module, were very few and thin, so I decided to give it a try, replacing them with calls to the repository through the module DctmAPI.py (see blog here DctmAPI.py). The result, after resolving a few issues due to the way Documentum repositories implement the file system metaphor, was quite effective and is presented in this blog. Enjoy.

Installing the module

As the saying goes, the shoemaker’s son always goes barefoot: no GitHub here, so you’ll have to download the module’s original code from the aforementioned site, rename it to original-server.py and patch it. The changes have been kept minimal so that the resulting patch file is small and manageable.
On my Linux box, the downloaded source had extraneous empty lines, which I removed with following one-liner:

$ gawk -v RS='\n\n' '{print}' original-server.py > tmp.py; mv tmp.py original-server.py

After that, save the following patch instructions into the file delta.patch:

623a624,625
> import DctmBrowser
> 
637a640,644
>     session = None 
> 
>     import re
>     split = re.compile('(.+?)\(([0-9a-f]{16})\)')
>     last = re.compile('(.+?)\(([0-9a-f]{16})\).?$')
666,667c673,674
<         f = None
<         if os.path.isdir(path):
---
>         # now path is a tuple (current path, r_object_id)
>         if DctmBrowser.isdir(SimpleHTTPRequestHandler.session, path):
678,685c685,686
<             for index in "index.html", "index.htm":
<                 index = os.path.join(path, index)
<                 if os.path.exists(index):
<                     path = index
<                     break
<             else:
<                 return self.list_directory(path)
<         ctype = self.guess_type(path)
---
>             return self.list_directory(path)
>         f = None
687c688
<             f = open(path, 'rb')
---
>             f = DctmBrowser.docopen(SimpleHTTPRequestHandler.session, path[1], 'rb')
693c694
<             self.send_header("Content-type", ctype)
---
>             self.send_header("Content-type", DctmBrowser.splitext(SimpleHTTPRequestHandler.session, path[1]))
709a711
>         path is a (r_folder_path, r_object_id) tuple;
712c714
<             list = os.listdir(path)
---
>             list = DctmBrowser.listdir(SimpleHTTPRequestHandler.session, path)
718c720
<         list.sort(key=lambda a: a.lower())
---
>         list.sort(key=lambda a: a[0].lower())
721,722c723,726
<             displaypath = urllib.parse.unquote(self.path,
<                                                errors='surrogatepass')
---
>             if ("/" != self.path):
>                displaypath = "".join(i[0] for i in SimpleHTTPRequestHandler.split.findall(urllib.parse.unquote(self.path, errors='surrogatepass')))
>             else:
>                displaypath = "/"
724c728
<             displaypath = urllib.parse.unquote(path)
---
>             displaypath = urllib.parse.unquote(path[0])
727c731
<         title = 'Directory listing for %s' % displaypath
---
>         title = 'Repository listing for %s' % displaypath
734c738
<         r.append('\n<h1>%s</h1>' % title)
---
>         r.append('<h3>%s</h3>\n' % title)
736,737c740,745
<         for name in list:
<             fullname = os.path.join(path, name)
---
>         # add an .. for the parent folder;
>         if ("/" != path[0]):
>             linkname = "".join(i[0] + "(" + i[1] + ")" for i in SimpleHTTPRequestHandler.split.findall(urllib.parse.unquote(self.path, errors='surrogatepass'))[:-1]) or "/"
>             r.append('<li><a href="%s">%s</a></li>' % (urllib.parse.quote(linkname, errors='surrogatepass'), html.escape("..")))
>         for (name, r_object_id) in list:
>             fullname = os.path.join(path[0], name)
740c748
<             if os.path.isdir(fullname):
---
>             if DctmBrowser.isdir(SimpleHTTPRequestHandler.session, (name, r_object_id)):
742,749c750,751
<                 linkname = name + "/"
<             if os.path.islink(fullname):
<                 displayname = name + "@"
<                 # Note: a link to a directory displays with @ and links with /
<             r.append('<li><a href="%s">%s</a></li>'
<                     % (urllib.parse.quote(linkname,
<                                           errors='surrogatepass'),
<                        html.escape(displayname)))
---
>                 linkname = name + "(" + r_object_id + ")" + "/"
>             r.append('<li><a href="%s">%s</a></li>' % (urllib.parse.quote(linkname, errors='surrogatepass'), html.escape(displayname)))
762,767c764
<         """Translate a /-separated PATH to the local filename syntax.
<
<         Components that mean special things to the local file system
<         (e.g. drive or directory names) are ignored. (XXX They should
<         probably be diagnosed.)
---
>         """Extracts the path and r_object_id parts of a path formatted thusly: /....(r_object_id){/....(r_object_id)}
768a766,768
>         if "/" == path:
>             return (path, None)
>
773d772
<         trailing_slash = path.rstrip().endswith('/')
781c780
<         path = os.getcwd()
---
>         path = "/"
787,789c786,787
<         if trailing_slash:
<             path += '/'
<         return path
---
>         (path, r_object_id) = SimpleHTTPRequestHandler.last.findall(path)[0]
>         return (path, r_object_id)
807,840d804
<     def guess_type(self, path):
<         """Guess the type of a file.
<
<         Argument is a PATH (a filename).
<
<         Return value is a string of the form type/subtype,
<         usable for a MIME Content-type header.
<
<         The default implementation looks the file's extension
<         up in the table self.extensions_map, using application/octet-stream
<         as a default; however it would be permissible (if
<         slow) to look inside the data to make a better guess.
<
<         """
<
<         base, ext = posixpath.splitext(path)
<         if ext in self.extensions_map:
<             return self.extensions_map[ext]
<         ext = ext.lower()
<         if ext in self.extensions_map:
<             return self.extensions_map[ext]
<         else:
<             return self.extensions_map['']
<
<         if not mimetypes.inited:
<             mimetypes.init() # try to read system mime.types
<         extensions_map = mimetypes.types_map.copy()
<         extensions_map.update({
<             '': 'application/octet-stream', # Default
<             '.py': 'text/plain',
<             '.c': 'text/plain',
<             '.h': 'text/plain',
<             })
1175c1140
<          ServerClass=HTTPServer, protocol="HTTP/1.0", port=8000, bind=""):
---
>          ServerClass=HTTPServer, protocol="HTTP/1.0", port=8000, bind="", session = None):
1183a1149
>     HandlerClass.session = session
1212d1177
<

    Apply the patch using the following command:

    $ patch -n original-server.py delta.patch -o server.py
    

    server.py is the patched module with the repository access operations replacing the file system access ones.
    As the command-line needs some more parameters for the connectivity to the repository, an updated main block has been added to parse them and moved into the new executable browser_repo.py. Here it is:

    import argparse
    import sys
    import server
    import textwrap
    import DctmAPI
    import DctmBrowser
    
    if __name__ == '__main__':
        parser = argparse.ArgumentParser(
           formatter_class=argparse.RawDescriptionHelpFormatter,
           description = textwrap.dedent("""\
    A web page to navigate a docbase's cabinets & folders.
    Based on Łukasz Langa's python server.py module https://hg.python.org/cpython/file/3.5/Lib/http/server.py
    cec at dbi-services.com, December 2020, integration with Documentum repositories;
    """))
        parser.add_argument('--bind', '-b', default='', metavar='ADDRESS',
                            help='Specify alternate bind address [default: all interfaces]')
        parser.add_argument('--port', action='store',
                            default=8000, type=int,
                            nargs='?',
                            help='Specify alternate port [default: 8000]')
        parser.add_argument('-d', '--docbase', action='store',
                            default='dmtest73', type=str,
                            nargs='?',
                            help='repository name [default: dmtest73]')
        parser.add_argument('-u', '--user_name', action='store',
                            default='dmadmin',
                            nargs='?',
                            help='user name [default: dmadmin]')
        parser.add_argument('-p', '--password', action='store',
                            default='dmadmin',
                            nargs='?',
                            help=' user password [default: "dmadmin"]')
        args = parser.parse_args()
    
        # Documentum initialization and connecting here;
        DctmAPI.logLevel = 1
    
        # not really needed as it is done in the module itself;
        status = DctmAPI.dmInit()
        if status:
           print("dmInit() was successful")
        else:
           print("dmInit() was not successful, exiting ...")
           sys.exit(1)
    
        session = DctmAPI.connect(args.docbase, args.user_name, args.password)
        if session is None:
           print("no session opened in docbase %s as user %s, exiting ..." % (args.docbase, args.user_name))
           exit(1)
    
        try:
           server.test(HandlerClass=server.SimpleHTTPRequestHandler, port=args.port, bind=args.bind, session = session)
        finally:
           print("disconnecting from repository")
           DctmAPI.disconnect(session)
    

    Save it into file browser_repo.py. This is the new main program.
    Finally, helper functions have been added to interface the main program to the module DctmAPI:

    #
    # new help functions for browser_repo.py;
    #
    
    import DctmAPI
    
    def isdir(session, path):
       """
       return True if path is a folder, False otherwise;
       path is a tuple (r_folder_path, r_object_id);
       """
       if "/" == path[0]:
          return True
       else:
          id = DctmAPI.dmAPIGet("retrieve, " + session + ",dm_folder where r_object_id = '" + path[1] + "'")
       return id
    
    def listdir(session, path):
       """
       return a tuple of objects, folders or documents with their r_object_id, in folder path[0];
       path is a tuple (r_folder_path, r_object_id);
       """
       result = []
       if path[0] in ("/", ""):
          DctmAPI.select2dict(session, "select object_name, r_object_id from dm_cabinet", result)
       else:
          DctmAPI.select2dict(session, "select object_name, r_object_id from dm_document where folder(ID('" + path[1] + "')) UNION select object_name, r_object_id from dm_folder where folder(ID('" + path[1] + "'))", result)
       return [[doc["object_name"], doc["r_object_id"]] for doc in result]
    
    def docopen(session, r_object_id, mode):
       """
       returns a file handle on the document with id r_object_id downloaded from its repository to the temporary location and opened;
       """
       temp_storage = '/tmp/'
       if DctmAPI.dmAPIGet("getfile," + session + "," + r_object_id + "," + temp_storage + r_object_id):
          return open(temp_storage + r_object_id, mode)
       else:
          raise OSError
    
    def splitext(session, r_object_id):
       """
       returns the mime type as defined in dm_format for the document with id r_object_id;
       """
       result = []
       DctmAPI.select2dict(session, "select mime_type from dm_format where r_object_id in (select format from dmr_content c, dm_document d where any c.parent_id = d.r_object_id and d.r_object_id = '" + r_object_id + "')", result)
       return result[0]["mime_type"] if result else ""
    

    Save this code into the file DctmBrowser.py.
    To summarize, we have:
    1. the original module original_server.py to be downloaded from the web
    2. delta.patch, the diff file used to patch original_server.py into file server.py
    3. DctmAPI.py, the python interface to Documentum, to be fetched from the provided link to a past blog
    4. helper functions in module DctmBrowser.py
    5. and finally the main executable browser_repo.py
    Admittedly, a git repository would be nice here, maybe one day …
    Use the command below to get the program’s help screen:

    $ python browser_repo.py --help                        
    usage: browser_repo.py [-h] [--bind ADDRESS] [--port [PORT]] [-d [DOCBASE]]
                          [-u [USER_NAME]] [-p [PASSWORD]]
    
    A web page to navigate a docbase's cabinets & folders.
    Based on Łukasz Langa's python server.py module https://hg.python.org/cpython/file/3.5/Lib/http/server.py
    cec at dbi-services.com, December 2020, integration with Documentum repositories;
    
    optional arguments:
      -h, --help            show this help message and exit
      --bind ADDRESS, -b ADDRESS
                            Specify alternate bind address [default: all
                            interfaces]
      --port [PORT]         Specify alternate port [default: 8000]
      -d [DOCBASE], --docbase [DOCBASE]
                            repository name [default: dmtest73]
      -u [USER_NAME], --user_name [USER_NAME]
                            user name [default: dmadmin]
      -p [PASSWORD], --password [PASSWORD]
                            user password [default: "dmadmin"]
    

    Thus, the command below will launch the server on port 9000 with a session opened in repository dmtest73 as user dmadmin with password dmadmin:

    $ python browser_repo.py --port 9000 -d dmtest73 -u dmadmin -p dmadmin 
    

    If you prefer long name options, use the alternative below:

    $ python browser_repo.py --port 9000 --docbase dmtest73 --user_name dmadmin --password dmadmin 
    

    Start your favorite browser, any browser, just as God intended it in the first place, and point it to the host where you started the program with the specified port, e.g. http://192.168.56.10:9000/:

    You are gratified with a very spartan, yet effective, view on the repository’s cabinets. Congratulations, you did it !

    Moving around in the repository

    As there is no root directory in a repository, an empty path or “/” is interpreted as a request to display the list of all the cabinets; each cabinet is the root of a directory tree. The program displays dm_folder and dm_cabinet objects (the latter being a sub-type of dm_folder after all), and dm_document objects. Folders have a trailing slash to identify them, whereas documents have none. There are many other object types in repositories’ folders and I chose not to display them because I did not need to, but this can be changed on lines 25 and 27 of the helper module DctmBrowser.py by specifying a different doctype, e.g. the super-type dm_sysobject instead.
    An addition to the original server module is the .. link to the parent folder; I think it is easier to use than the browser’s back button or right click/back arrow, but those are still usable since the program is stateless. Actually, a starting page could even be specified manually in the starting URL if it weren’t for its unusual format. In effect, the folder components and documents’ full paths in URLs and html links are suffixed with a parenthesized r_object_id, e.g.:

    http://192.168.56.10:9000/System(0c00c35080000106)/Sysadmin(0b00c3508000034e)/Reports(0b00c35080000350)/
    -- or, url-encoded:
    http://192.168.56.10:9000/System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/Reports%280b00c35080000350%29/
    

    This looks ugly but it allows to solve 2 issues specific to repositories:
    1. Document names are not unique within the same folder; they are on a par with any other document attribute. Consequently, a folder can quietly contain hundreds of identically named documents without any name conflict. In effect, what tells two documents apart is their unique r_object_id attribute, and that is the reason why it is appended to the links and URLs. This is not a big deal because this potentially annoying technical information is not displayed in the web page but is only visible while hovering over links and in the browser’s address bar.
    2. Document names can contain any character, even “/” and “:”. So, given a document’s full path name, how to parse it and separate the parent folder from the document’s name so it can be reached ? There is no generic, unambiguous way to do that. With the appended document’s unique r_object_id, it is a simple matter to extract the id from the full path and Bob’s your uncle (RIP Jerry P.).
    Both of the above specificities make it impossible to access a document through its full path name alone; therefore, the documents’ ids must be carried around. For folders, it is not necessary, but it has been done in order to have a uniform format. As a side-effect, database performance is possibly better too.
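The two regular expressions that the patch adds to SimpleHTTPRequestHandler (split and last) are all that is needed to take such a suffixed path apart. Here is what they produce on the sample URL path shown above (the ids are the ones from that example):

```python
import re

# the two patterns added to SimpleHTTPRequestHandler by the patch:
split = re.compile(r'(.+?)\(([0-9a-f]{16})\)')
last = re.compile(r'(.+?)\(([0-9a-f]{16})\).?$')

path = "/System(0c00c35080000106)/Sysadmin(0b00c3508000034e)/Reports(0b00c35080000350)/"

# every (component, r_object_id) pair, used to rebuild the display path:
print(split.findall(path))
# → [('/System', '0c00c35080000106'), ('/Sysadmin', '0b00c3508000034e'), ('/Reports', '0b00c35080000350')]

# the whole path minus its trailing slash, plus the id of its last component,
# which is what translate_path() returns to the request handler:
print(last.findall(path))
# → [('/System(0c00c35080000106)/Sysadmin(0b00c3508000034e)/Reports', '0b00c35080000350')]
```

Because only the trailing r_object_id is ever used to reach the object, any “/” or “(” characters inside the document names themselves cannot confuse the parsing.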
    If the program is started with no stdout redirection, log messages are visible on the screen, e.g.:

    dmadmin@dmclient:~/dctm-webserver$ python browser_repo.py --port 9000 --docbase dmtest73 --user_name dmadmin --password dmadmin 
    dmInit() was successful
    Serving HTTP on 0.0.0.0 port 9000 ...
    192.168.56.1 - - [05/Dec/2020 22:57:00] "GET / HTTP/1.1" 200 -
    192.168.56.1 - - [05/Dec/2020 22:57:03] "GET /System%280c00c35080000106%29/ HTTP/1.1" 200 -
    192.168.56.1 - - [05/Dec/2020 22:57:07] "GET /System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/ HTTP/1.1" 200 -
    192.168.56.1 - - [05/Dec/2020 22:57:09] "GET /System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/Reports%280b00c35080000350%29/ HTTP/1.1" 200 -
    192.168.56.1 - - [05/Dec/2020 22:57:14] "GET /System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/Reports%280b00c35080000350%29/ConsistencyChecker%280900c3508000211e%29/ HTTP/1.1" 200 -
    192.168.56.1 - - [05/Dec/2020 22:57:22] "GET /System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/Reports%280b00c35080000350%29/StateOfDocbase%280900c35080002950%29/ HTTP/1.1" 200 -
    192.168.56.1 - - [05/Dec/2020 22:57:27] "GET /System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/ HTTP/1.1" 200 -
    ...
    

    The logged information and format are quite standard for web servers: one log line per request, beginning with the client’s IP address, followed by the timestamp, the request type (there will only be GETs as the utility is read-only), the resource, and the returned http status code.
    If the variable DctmAPI.logLevel is set to True (or 1, or a non-empty string or collection, as python interprets them all as the boolean True) in the main program, API statements and messages from the repository are logged to stdout too, which can help if troubleshooting is needed, e.g.:

    dmadmin@dmclient:~/dctm-webserver$ python browser_repo.py --port 9000 --docbase dmtest73 --user_name dmadmin --password dmadmin 
    'in dmInit()' 
    "dm= after loading library libdmcl.so" 
    'exiting dmInit()' 
    dmInit() was successful
    'in connect(), docbase = dmtest73, user_name = dmadmin, password = dmadmin' 
    'successful session s0' 
    '[DM_SESSION_I_SESSION_START]info:  "Session 0100c35080002e3d started for user dmadmin."' 
    'exiting connect()' 
    Serving HTTP on 0.0.0.0 port 9000 ...
    'in select2dict(), dql_stmt=select object_name, r_object_id from dm_cabinet' 
    192.168.56.1 - - [05/Dec/2020 23:02:59] "GET / HTTP/1.1" 200 -
    "in select2dict(), dql_stmt=select object_name, r_object_id from dm_document where folder(ID('0c00c35080000106')) UNION select object_name, r_object_id from dm_folder where folder(ID('0c00c35080000106'))" 
    192.168.56.1 - - [05/Dec/2020 23:03:03] "GET /System%280c00c35080000106%29/ HTTP/1.1" 200 -
    "in select2dict(), dql_stmt=select object_name, r_object_id from dm_document where folder(ID('0b00c3508000034e')) UNION select object_name, r_object_id from dm_folder where folder(ID('0b00c3508000034e'))" 
    192.168.56.1 - - [05/Dec/2020 23:03:05] "GET /System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/ HTTP/1.1" 200 -
    "in select2dict(), dql_stmt=select object_name, r_object_id from dm_document where folder(ID('0b00c35080000350')) UNION select object_name, r_object_id from dm_folder where folder(ID('0b00c35080000350'))" 
    192.168.56.1 - - [05/Dec/2020 23:03:10] "GET /System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/Reports%280b00c35080000350%29/ HTTP/1.1" 200 -
    "in select2dict(), dql_stmt=select mime_type from dm_format where r_object_id in (select format from dmr_content c, dm_document d where any c.parent_id = d.r_object_id and d.r_object_id = '0900c3508000211e')" 
    192.168.56.1 - - [05/Dec/2020 23:03:11] "GET /System%280c00c35080000106%29/Sysadmin%280b00c3508000034e%29/Reports%280b00c35080000350%29/ConsistencyChecker%280900c3508000211e%29/ HTTP/1.1" 200 -
    

    Feel free to initialize that variable from the command-line if you prefer.
    A nice touch in the original module is that execution errors are trapped in an exception handler, so the program does not need to be restarted in case of failure. As it is stateless, errors have no effect on subsequent requests.
    Several views of the same repository can be obtained by starting several instances of the program at once on different listening ports. Similarly, if one feels the urge to explore several repositories at once, just start as many instances as needed with different listening ports and appropriate credentials.
    To exit the program, just type ctrl-c; no data will be lost as the program only browses repositories in read-only mode.

    A few comments on the customizations

    Lines 8 and 9 in the diff above introduce the regular expressions that will be used later to extract the path-component/r_object_id couples from the URL’s path part; “split” matches one such couple anywhere in the path, while “last” matches the final one and is aimed at getting the r_object_id of the folder that is clicked on from its full path name. python’s re module allows pre-compiling them for efficiency. Note the .+? syntax to specify a non-greedy regular expression.
    On line 13, the function isdir() is now implemented in the module DctmBrowser and returns True if the clicked item is a folder.
    Similarly, line 25 calls a reimplementation of os.open() in module DctmBrowser that exports locally the clicked document’s content to /tmp, opens it and returns the file handle; this will allow the content to be sent to the browser for visualization.
    Line 31 calls a reimplementation of os.listdir() to list the content of the clicked repository folder.
    Line 37 applies the “split” regular expression to the current folder path to extract its couples (returned as an array of sub-path/r_object_id pairs) and then concatenates the sub-paths together to get the current folder to be displayed later. More concretely, it allows going from
    /System(0c00c35080000106)/Sysadmin(0b00c3508000034e)/Reports(0b00c35080000350)/
    to
    /System/Sysadmin/Reports
    which is displayed in the html page’s title.
    The conciseness of the expression passed to join() is admirable; lots of programming mistakes and low-level verbosity are prevented thanks to python’s list comprehensions.
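For illustration, the whole transformation can be reproduced in a couple of lines; the pattern below is an assumption, equivalent in spirit to the module's "split" regex:

```python
import re

# assumed equivalent of the "split" regex: a non-greedy sub-path followed by
# its parenthesized 16-character hexadecimal r_object_id;
re_split = re.compile(r"(.+?)\(([0-9a-f]{16})\)")

suffixed = "/System(0c00c35080000106)/Sysadmin(0b00c3508000034e)/Reports(0b00c35080000350)/"
couples = re_split.findall(suffixed)
# [('/System', '0c00c35080000106'), ('/Sysadmin', '0b00c3508000034e'), ('/Reports', '0b00c35080000350')]

# the list comprehension keeps only the sub-paths, and join() glues them back together;
clean_path = "".join([name for name, ident in couples])
# clean_path == "/System/Sysadmin/Reports"
```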
    Similarly, on line 52, the current folder’s parent folder is computed from the current path.
    On line 86, the second regular expression, “last”, is applied to extract the r_object_id of the current folder (i.e. the one that is clicked on).
    Lines 89 to 121 were removed from the original module because mime processing is much simplified: the repository maintains a list of mime formats (table dm_format) and the selected document’s mime type can be found by just looking up that table, see function splitext() in module DctmBrowser, called on line 27. By returning a valid mime type to the browser, the latter can cleverly process the content, i.e. display the supported content types (such as text) and prompt for some other action otherwise (e.g. office documents).
    On line 126, the session id is passed to class SimpleHTTPRequestHandler and stored as a class variable; later it is referenced as SimpleHTTPRequestHandler.session inside the class, but self.session would work too, although I prefer the former syntax as it makes clear that session does not depend on the instantiations of the class; the session is valid for all of them. As the program connects to only one repository at startup time, there is no need to make session an instance variable.
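Sketched minimally (the handler name and session value are made up for the example):

```python
from http.server import SimpleHTTPRequestHandler

# illustrative sketch: http.server creates a new handler instance per request,
# so per-server state such as the repository session is best held in a class variable;
class RepoRequestHandler(SimpleHTTPRequestHandler):
    session = None   # set once at startup, shared by every request

# at startup, after a successful connect():
RepoRequestHandler.session = "s0"

# inside any handler method, both SimpleHTTPRequestHandler.session and
# self.session resolve to the same class attribute;
```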
    The module DctmBrowser is used as a bridge between the module DctmAPI and the main program browser_repo.py. This is where most of the repository work is done. As is obvious here, not much is needed to go from listing directories and files on a filesystem to listing folders and documents in a repository.

    Security

    As shown by the usage message above (option --help), a bind address can be specified. By default, the embedded web server listens on all the machine’s network interfaces and, as there is no authentication with the web server, another machine on the same network could reach it and access the repository through the opened session, if there is no firewall in the way. To prevent this, just specify the loopback IP address, 127.0.0.1 or localhost:

    dmadmin@dmclient:~/dctm-webserver$ python browser_repo.py --bind 127.0.0.1 --port 9000 --docbase dmtest73 --user_name dmadmin --password dmadmin 
    ...
    Serving HTTP on 127.0.0.1 port 9000 ...
    
    # testing locally (no GUI on server, using wget):
    dmadmin@dmclient:~/dctm-webserver$ wget 127.0.0.1:9000
    --2020-12-05 22:06:02--  http://127.0.0.1:9000/
    Connecting to 127.0.0.1:9000... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 831 
    Saving to: 'index.html'
    
    index.html                                           100%[=====================================================================================================================>]     831  --.-KB/s    in 0s      
    
    2020-12-05 22:06:03 (7.34 MB/s) - 'index.html' saved [831/831]
    
    dmadmin@dmclient:~/dctm-webserver$ cat index.html 
    Repository listing for /
    
    Repository listing for /
    
    In addition, as the web server knows the client’s IP address (see self.address_string()), some more finely-tuned address restrictions could also be implemented by filtering out unwelcome clients and letting authorized ones in.
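A minimal sketch of such filtering, using socketserver's standard verify_request() hook (the allowlist, helper and class names are made up for the example):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# hypothetical allowlist of client addresses;
ALLOWED_CLIENTS = {"127.0.0.1", "192.168.56.1"}

def is_allowed(client_ip):
    # trivial exact-match filter; real code might match whole networks
    # with the ipaddress module instead;
    return client_ip in ALLOWED_CLIENTS

class FilteringHTTPServer(HTTPServer):
    def verify_request(self, request, client_address):
        # standard socketserver hook: returning False makes the server
        # drop the connection without processing the request;
        return is_allowed(client_address[0])

# usage sketch (not run here):
# FilteringHTTPServer(("0.0.0.0", 9000), SimpleHTTPRequestHandler).serve_forever()
```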
    Presently, the original module does not support https, and hence the network traffic between clients and server is unencrypted. However, one could imagine installing a small nginx or apache web server as a front-end on the same machine, setting up security at their level and adding a redirection to the python module listening on localhost over http; a quick and easy solution that does not require any change in the code, although that would be way out of scope for the module, whose primary goal is to serve requests from the same machine it runs on. Note that if we start talking about adding another web server, we could as well move all the repository browsing code into a separate (Fast)CGI python program directly invoked by the web server and make it available to any allowed networked user as a full-blown service, complete with authentication and access rights.

    Conclusion

    This tool is really a nice utility for browsing repositories, especially those running on Unix/Linux machines, because most of the time those servers are headless and have no GUI applications installed. The tool interfaces any browser, running on any O/S or device, with such repositories and alleviates the usual burden of executing getfile API statements and scp commands to transfer the contents to the desktop for visualization. For this precise functionality, it is even better than dqman, at least for browsing and visualizing browser-readable contents.
    There is a lot of room for improvement if one would like a full repository browser, e.g. displaying the metadata as well. In addition, if needed, the original module’s functionality, browsing the local sub-directory tree, could be reestablished as it is not incompatible with repositories.
    The tool also proves again that the approach of picking an existing tool that implements most of the requirements and customizing it to a specific need is a very effective one.

    Cet article A Simple Repository Browser Utility est apparu en premier sur Blog dbi services.

    DctmAPI.py revisited


    2 years ago, I proposed a ctypes-based Documentum extension for python, DctmAPI.py. While it did the job, it was quite basic. For example, its select2dict() function, as inferred from its name, returned the documents from a dql query as a list of dictionaries, one per document, all in memory. While this is OK for testing and demonstration purposes, it can potentially put some stress on the available memory; besides, do we really need to hold a complete result set with several hundred thousand rows in memory at once? It makes more sense to iterate and process the result row by row. For instance, databases have cursors for that purpose.
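The difference can be illustrated with a toy generator standing in for co_select() (the row contents below are fabricated for the example):

```python
# toy stand-in for co_select(): rows are produced one at a time by yield
# instead of being accumulated in a list;
def co_rows(n):
    for i in range(n):
        yield {"r_object_id": f"09{i:014x}", "object_name": f"doc_{i}"}

# row-by-row processing: only one row is alive in memory at any time;
count = sum(1 for row in co_rows(100000))
# count == 100000

# the in-memory variant is then just a comprehension around the generator,
# which is essentially what the new select2dict() does;
all_rows = [row for row in co_rows(1000)]
```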
    Another rudimentary demonstration function was select(). Like select2dict(), it executed a dql statement but output the result row by row to stdout without any special attempt at pretty-printing it. The result was quite crude, yet OK for testing purposes.
    So, after 2 years, I thought it was about time to revamp this interface and make it more practical. A new generator-based function, co_select(), has been introduced for a more efficient processing of the result set. select2dict() is still available for those cases where it is handy to have a full result set in memory and the volume is manageable; actually, select2dict() is now down to 2 lines, the second one being a list comprehension around co_select() (see the listing below). select() has become select_to_stdout() and its output is much enhanced; it can be json or tabular, with optional column-wrapping à la sql*plus and colorization as well, all stuff I mentioned several times in the past, e.g. here. Moreover, a pagination functionality has been added through the functions paginate() and paginate_to_stdout(). Finally, exceptions and message logging have been used liberally. As can be seen, those are quite some improvements over the original version. Of course, there are many ways to implement them depending on the level of usability and performance that is sought. Also, new functionalities, maybe unexpected ones as of this writing, may turn out to be necessary, so the current functions are only to be taken as examples.
    Let’s see what the upgraded module looks like now.

    Listing of DctmAPI.py

    """
    This module is a python - Documentum binding based on ctypes;
    requires libdmcl40.so/libdmcl.so to be reachable through LD_LIBRARY_PATH;
    initial version, C. Cervini - dbi-services.com - May 2018
    revised, C. Cervini - dbi-services.com - December 2020
    
    The binding works as-is for both python2 and python3; no recompilation required; that's the good thing with ctypes compared to e.g. distutils/SWIG;
    Under a 32-bit O/S, it must use the libdmcl40.so, whereas under a 64-bit Linux it must use the java backed one, libdmcl.so;
    
    For compatibility with python3 (where strings are unicode and no longer arrays of bytes), ctypes string parameters are always converted to bytes, either by prefixing them
    with a b if literal or by invoking their encode('ascii', 'ignore') method; to get back text from bytes, b.decode() is used; this works in python2 as well as in python3, so the source is compatible with both versions of the language;
    
    Because of the use of f-string formatting, python 3.6 minimum is required;
    """
    
    import os
    import ctypes
    import sys, traceback
    import json
    
    # use foreign C library;
    # with eContent server < v6.x, 32-bit Linux, use libdmcl40.so:
    #dmlib = '/home/dmadmin/documentum53/libdmcl40.so'
    #dmlib = 'libdmcl40.so'
    
    # with eContent server >= v6.x, 64-bit Linux, use the java-backed library:
    dmlib = 'libdmcl.so'
    
    # used by ctypes;
    dm = 0
    
    # maximum cache size in rows;
    # used while calling the paginate() function;
    # set this according to the row size and the available memory;
    # set it to 0 for unlimited memory;
    MAX_CACHE_SIZE = 10000
    
    # incremental log verbosity levels, i.e. include previous levels;
    class LOG_LEVEL:
       # no logging;
       nolog = 0
    
       # informative messages;
       info = 1
    
       # errors, i.e. exceptions messages and less;
       error = 2
    
       # debug, i.e. functions calls and less;
       debug = 3
    
       # current active level;
       log_level = error
       
    class dmException(Exception):
       """
       generic, catch-all documentum exception;
       """
       def __init__(self, origin = "", message = None):
          super().__init__(message)
          self.origin = origin
          self.message = message
    
       def __repr__(self):
          return f"exception in {self.origin}: {self.message if self.message else ''}"
    
    def show(level = LOG_LEVEL.error, mesg = "", beg_sep = "", end_sep = ""):
       """
       displays the message msg if allowed
       """
       if LOG_LEVEL.log_level > LOG_LEVEL.nolog and level <= LOG_LEVEL.log_level:
          print(f"{beg_sep} {repr(mesg)} {end_sep}")
    
    def dmInit():
       """
       initializes the Documentum part;
       returns True if successful, False otherwise;
       since they already have an implicit namespace through their dm prefix, dm.dmAPI* would be redundant, so we define later dmAPI*() as wrappers around their respective dm.dmAPI*() functions;
       """
       show(LOG_LEVEL.debug, "in dmInit()")
       global dm
       try:
          dm = ctypes.cdll.LoadLibrary(dmlib);  dm.restype = ctypes.c_char_p
          show(LOG_LEVEL.debug, f"in dmInit(), dm= {str(dm)} after loading library {dmlib}")
          dm.dmAPIInit.restype    = ctypes.c_int;
          dm.dmAPIDeInit.restype  = ctypes.c_int;
          dm.dmAPIGet.restype     = ctypes.c_char_p;      dm.dmAPIGet.argtypes  = [ctypes.c_char_p]
          dm.dmAPISet.restype     = ctypes.c_int;         dm.dmAPISet.argtypes  = [ctypes.c_char_p, ctypes.c_char_p]
          dm.dmAPIExec.restype    = ctypes.c_int;         dm.dmAPIExec.argtypes = [ctypes.c_char_p]
          status  = dm.dmAPIInit()
       except Exception as e:
          show(LOG_LEVEL.error, "exception in dmInit():")
          show(LOG_LEVEL.error, e)
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
          status = False
       else:
          status = True
       finally:
          show(LOG_LEVEL.debug, "exiting dmInit()")
          return status
       
    def dmAPIDeInit():
       """
       releases the memory structures in documentum's library;
       returns True if no error, False otherwise;
       """
       show(LOG_LEVEL.debug, "in dmAPIDeInit()")
       try:
          dm.dmAPIDeInit()
       except Exception as e:
          show(LOG_LEVEL.error, "exception in dmAPIDeInit():")
          show(LOG_LEVEL.error, e)
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
          status = False
       else:
          status = True
       finally:
          show(LOG_LEVEL.debug, "exiting dmAPIDeInit()")
          return status
       
    def dmAPIGet(s):
       """
       passes the string s to dmAPIGet() method;
       returns a non-empty string if OK, None otherwise;
       """
       show(LOG_LEVEL.debug, "in dmAPIGet()")
       try:
          value = dm.dmAPIGet(s.encode('ascii', 'ignore'))
       except Exception as e:
          show(LOG_LEVEL.error, "exception in dmAPIGet():")
          show(LOG_LEVEL.error, e)
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
          status = False
       else:
          status = True
       finally:
          show(LOG_LEVEL.debug, "exiting dmAPIGet()")
          return value.decode() if status and value is not None else None
    
    def dmAPISet(s, value):
       """
       passes the string s to dmAPISet() method;
       returns TRUE if OK, False otherwise;
       """
       show(LOG_LEVEL.debug, "in dmAPISet()")
       try:
          status = dm.dmAPISet(s.encode('ascii', 'ignore'), value.encode('ascii', 'ignore'))
       except Exception as e:
          show(LOG_LEVEL.error, "exception in dmAPISet():")
          show(LOG_LEVEL.error, e)
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
          status = False
       else:
          status = True
       finally:
          show(LOG_LEVEL.debug, "exiting dmAPISet()")
          return status
    
    def dmAPIExec(stmt):
       """
       passes the string s to dmAPIExec() method;
       returns TRUE if OK, False otherwise;
       """
       show(LOG_LEVEL.debug, "in dmAPIExec()")
       try:
          status = dm.dmAPIExec(stmt.encode('ascii', 'ignore'))
       except Exception as e:
          show(LOG_LEVEL.error, "exception in dmAPIExec():")
          show(LOG_LEVEL.error, e)
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
          status = False
       else:
          # no error, status is passed through, to be converted to boolean below;
          pass
       finally:
          show(LOG_LEVEL.debug, "exiting dmAPIExec()")
          return True == status 
    
    def connect(docbase, user_name, password):
       """
       connects to given docbase as user_name/password;
       returns a session id if OK, None otherwise
       """
       show(LOG_LEVEL.debug, "in connect(), docbase = " + docbase + ", user_name = " + user_name + ", password = " + password) 
       try:
          session = dmAPIGet(f"connect,{docbase},{user_name},{password}")
          if session is None:
             raise dmException(origin = "connect()", message = f"unsuccessful connection to docbase {docbase} as user {user_name}")
       except dmException as dme:
          show(LOG_LEVEL.error, dme)
          # session is None here, so getmessage may return nothing; guard against it;
          msg = dmAPIGet(f"getmessage,{session}")
          if msg:
             show(LOG_LEVEL.error, msg.rstrip())
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
          session = None
       else:
          show(LOG_LEVEL.debug, f"successful session {session}")
          # emptying the message stack in case some are left from previous calls;
          while True:
             msg = dmAPIGet(f"getmessage,{session}")
             if not msg or not msg.rstrip():
                break
             show(LOG_LEVEL.debug, msg)
       finally:
          show(LOG_LEVEL.debug, "exiting connect()")
          return session
    
    def execute(session, dql_stmt):
       """
       execute non-SELECT DQL statements;
       returns TRUE if OK, False otherwise;
       """
       show(LOG_LEVEL.debug, f"in execute(), dql_stmt={dql_stmt}")
       try:
          query_id = dmAPIGet(f"query,{session},{dql_stmt}")
          if query_id is None:
             raise dmException(origin = "execute()", message = f"query {dql_stmt}")
          err_flag = dmAPIExec(f"close,{session},{query_id}")
          if not err_flag:
             raise dmException(origin = "execute()", message = "close")
       except dmException as dme:
          show(LOG_LEVEL.error, dme)
          show(LOG_LEVEL.error, dmAPIGet(f"getmessage,{session}").rstrip())
          status = False
       except Exception as e:
          show(LOG_LEVEL.error, "exception in execute():")
          show(LOG_LEVEL.error, e)
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
          status = False
       else:
          status = True
       finally:
          show(LOG_LEVEL.debug, "exiting execute()")
          return status
    
    def co_select(session, dql_stmt, numbering = False):
       """
       a coroutine version of former select2dict;
       the result set is returned one row at a time as a dictionary by a yield statement, e.g.:
       {"attr-1": "value-1", "attr-2": "value-2", ... "attr-n": "value-n"}
       in case of repeating attributes, value is an array of values, e.g.:
       { .... "attr-i": ["value-1", "value-2".... "value-n"], ....}
       """
       show(LOG_LEVEL.debug, "in co_select(), dql_stmt=" + dql_stmt)
       try:
          query_id = dmAPIGet(f"query,{session},{dql_stmt}")
          if query_id is None:
             show(LOG_LEVEL.error, f'in co_select(), error in dmAPIGet("query,{session},{dql_stmt}")')
             raise dmException(origin = "co_select", message = f"query {dql_stmt}")
    
          # counts the number of returned rows in the result set;
          row_counter = 0
    
          # list of attributes returned by query;
          # internal use only; the caller can compute it at will through the following expression: results[0].keys();
          attr_names = []
    
          # default number of rows to return at once;
          # can be dynamically changed by the caller through send();
          size = 1
    
          # multiple rows are returned as an array of dictionaries;
          results = []
    
          # iterate through the result set;
          while dmAPIExec(f"next,{session},{query_id}"):
             result = {"counter" : f"{row_counter + 1}"} if numbering else {}
             nb_attrs = dmAPIGet(f"count,{session},{query_id}")
             if nb_attrs is None:
                raise dmException(origin = "co_select", message = "count")
             nb_attrs = int(nb_attrs) 
             for i in range(nb_attrs):
                if 0 == row_counter:
                   # get the attributes' names only once for the whole query;
                   value = dmAPIGet(f"get,{session},{query_id},_names[{str(i)}]")
                   if value is None:
                      raise dmException(origin = "co_select", message = f"get ,_names[{str(i)}]")
                   attr_names.append(value)
    
                is_repeating = dmAPIGet(f"repeating,{session},{query_id},{attr_names[i]}")
                if is_repeating is None:
                   raise dmException(origin = "co_select", message = f"repeating {attr_names[i]}")
                is_repeating = 1 == int(is_repeating)
    
                if is_repeating:
                   # multi-valued attributes;
                   result[attr_names[i]] = []
                   count = dmAPIGet(f"values,{session},{query_id},{attr_names[i]}")
                   if count is None:
                      raise dmException(origin = "co_select", message = f"values {attr_names[i]}")
                   count = int(count)
    
                   for j in range(count):
                      value = dmAPIGet(f"get,{session},{query_id},{attr_names[i]}[{j}]")
                      if value is None:
                         value = "null"
                      result[attr_names[i]].append(value)
                else:
                   # mono-valued attributes;
                   value = dmAPIGet(f"get,{session},{query_id},{attr_names[i]}")
                   if value is None:
                      value = "null"
                   result[attr_names[i]] = value
    
             row_counter += 1
             results.append(result)
    
             size -= 1
             if size > 0:
                # a grouping has been requested;
                continue
             
             while True:
                # keeps returning the same results until the group size is non-negative;
                # default size value if omitted is 1, so next(r) keeps working;
                # if the size is 0, abort the result set;
                size = yield results
                if size is None:
                   # default value is 1;
                   size = 1
                   break
                if size >= 0:
                   # OK if size is positive or 0;
                   break
             results = []
             if 0 == size: break
    
          err_flag = dmAPIExec(f"close,{session},{query_id}")
          if not err_flag:
             raise dmException(origin = "co_select", message = "close")
    
          # if here, it means that the full result set has been read;
          # the finally clause will return the residual (i.e. out of the yield statement above) rows;
    
       except dmException as dme:
          show(LOG_LEVEL.error, dme)
          show(LOG_LEVEL.error, dmAPIGet(f"getmessage,{session}").rstrip())
       except Exception as e:
          show(LOG_LEVEL.error, "exception in co_select():")
          show(LOG_LEVEL.error, e)
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
       finally:
          # close the collection;
          try:
             show(LOG_LEVEL.debug, "exiting co_select()")
             dmAPIExec(f"close,{session},{query_id}")
          except Exception as e:
             pass
          return results
          # for some unknown reason, an exception is raised on returning ...;
          # let the caller handle it;
    
    def select_to_dict(session, dql_stmt, numbering = False):
       """
       new version of the former select2dict();
       execute in session session the DQL SELECT statement passed in dql_stmt and return the result set into an array of dictionaries;
       as the whole result set will be held in memory, be sure it is really necessary and rather use the more efficient co_select();
       """
       result = co_select(session, dql_stmt, numbering)
       return [row for row in result]
    
    def result_to_stdout(result, format = "table", column_width = 20, mode = "wrap", frame = True, fg_color = "BLACK", bg_color = "white", alt_period = 5, col_mode = 2, numbering = False):
       """
          print the list of dictionaries result into a table with column_width-wide columns and optional wrap-around and frame;
          result can be a generator from co_select() or an array of dictionaries;
          the output is like from idql only more readable with column wrap-around if values are too wide;
          if frame is True, a frame identical to the one from mysql/postgresql is drawn around the table;
          in order to increase readability, rows can be colorized by specifying foreground and background colors;
          alt_period is the number of rows to print in fg_color/bg_color before changing to bg_color/fg_color;
          if col_mode is:
             0: no colorization is applied;
             1: text color alternates between fg/bg and bg/fg every alt_period row blocks;
             2: alt_period row blocks are colorized 1st line fg/bg and the rest bg/fg
          color naming is different from termcolor's; we use the following convention which is later converted to termcolor's:
          bright text colors (does not apply to background color) are identified by the uppercase strings: "BLACK", "RED", "GREEN", "YELLOW", "BLUE", "MAGENTA", "CYAN", "WHITE";
          normal intensity colors are identified by the capitalized lowercase strings: "Black", "Red", "Green", "Yellow", "Blue", "Magenta", "Cyan", "White";
          dim intensity colors are identified by the lowercase strings: "black", "red", "green", "yellow", "blue", "magenta", "cyan", "white";
          if numbering is True and a tabular format is chosen, a column holding the row number is prepended to the table;
       """
    
       # let's use the termcolor package wrapper around the ANSI color escape sequences;
       from copy import deepcopy
       from termcolor import colored, cprint
    
       if fg_color[0].isupper() and fg_color[1:].islower():
          # capitalized name: normal intensity;
          fg_color = fg_color.lower()
          attr = []
       elif fg_color.islower():
          # all lowercase name: dim intensity;
          attr = ["dark"]
       elif fg_color.isupper():
          # all uppercase name: bright intensity;
          attr = ["bold"]
          fg_color = fg_color.lower()
       else:
          show(LOG_LEVEL.error, f"unsupported color {fg_color}; it must either be all uppercase or all lowercase or capitalized lowercase")
       if bg_color.isupper():
          bg_color = bg_color.lower()
       elif not bg_color.islower():
          show(LOG_LEVEL.error, f"unsupported color {bg_color}; it must either be all uppercase or all lowercase")
    
       # remap black to termcolor's grey;
       if "black" == fg_color:
          fg_color = "grey"
       if "black" == bg_color:
          bg_color = "grey"
    
       bg_color = "on_" + bg_color
       color_current_block = 0
    
       max_counter_digits = 7
    
       def colorization(index):
          nonlocal color_current_block, ind
          if 0 == col_mode:
             return "", "", []
          elif 1 == col_mode:
             #1: fg/bg every alt_period rows then switch to bg/fg for alt_period rows, then back again;
             if 0 == index % alt_period: 
                color_current_block = (color_current_block + 1) % 2
             return fg_color, bg_color, attr + ["reverse"] if 0 == color_current_block % 2 else attr
          else:
             #2: fg/bg as first line of every alt_period rows, then bg/fg;
             return fg_color, bg_color, attr if 0 == index % alt_period else attr + ["reverse"]
    
       def rows_to_stdout(rows, no_color = False):
          """
             print the list of dictionaries in rows in tabular format using the parent function's parameters;
              the first column holds the row number; we don't expect more than 10^max_counter_digits - 1 rows; if there are more and numbering is True, the table will look distorted; just increase max_counter_digits;
          """
          btruncate = "truncate" == mode
          ellipsis = "..."
          for i, row in enumerate(rows):
             # preserve the original data as they may be referenced elsewhere;
             row = deepcopy(row)
             # hack to keep history of printed rows...;
             col_fg, col_bg, col_attr = colorization(max(ind,i)) if 0 != col_mode and not no_color else ("white", "on_grey", [])
             while True:
                left_over = ""
                line = ""
                nb_fields = len(row)
                pos = 0
                for k,v in row.items():
                   nb_fields -= 1
                   Min = max(column_width, len(ellipsis)) if btruncate else column_width
    
                   # extract the next piece of the column and pad it with blanks to fill the width if needed;
                   if isinstance(v, list):
                      # process repeating attributes;
                      columnS = "{: <{width}}".format(v[0][:Min] if v else "", width = column_width if not (0 == pos and numbering) else max_counter_digits)
                      restColumn = btruncate and v and len(v[0]) > Min
                   else:
                      columnS = "{: <{width}}".format(v[:Min], width = column_width if not (0 == pos and numbering) else max_counter_digits)
                      restColumn = btruncate and v and len(v) > Min
                   if restColumn:
                      columnS = columnS[ : len(columnS) - len(ellipsis)] + ellipsis
    
                   # cell content colored only vs. the whole line;
                   #line += ("|  " if frame else "") + colored(columnS, col_fg, col_bg, col_attr) + ("  " if frame else ("  " if nb_fields > 0 else ""))
                   line += colored(("|  " if frame else "") + columnS + ("  " if frame or nb_fields > 0 else ""), col_fg, col_bg, col_attr)
    
                   if isinstance(v, list):
                      # process repeating attributes;
                      restS = v[0][Min : ] if v else ""
                      if restS:
                         v[0] = restS
                      elif v:
                         # next repeating value;
                         v.pop(0)
                         restS = v[0] if v else ""
                   else:
                      restS = v[Min : ]
                      row[k] = v[Min : ]
                   left_over += "{: <{width}}".format(restS, width = column_width if not (0 == pos and numbering) else max_counter_digits)
                   pos += 1
                # cell content colored only vs. the whole line;
                #print(line + ("|" if frame else ""))
                print(line + colored("|" if frame else "", col_fg, col_bg, col_attr))
                left_over = left_over.rstrip(" ")
                if not left_over or btruncate:
                   break
    
       def print_frame_line(nb_columns, column_width = 20):
          line = ""
          while nb_columns > 0:
             line += "+" + "{:-<{width}}".format('', width = (column_width if not (1 == nb_columns and numbering) else max_counter_digits) + 2 + 2)
             nb_columns -= 1
          line += "+"
          print(line)
          return line
    
       # result_to_stdout;
       try:
          if not format:
             # no output is requested;
             return
          if "json" != format and "table" != format:
             raise dmException(origin = "result_to_stdout", message = "format must be either json or table")
          if "wrap" != mode and "truncate" != mode:
             raise dmException(origin = "result_to_stdout", message = "invalid mode; mode must be either wrap or truncate")
          if "json" == format:
             for r in result:
                print(json.dumps(r, indent = 3))
          else:
             for ind, r in enumerate(result):
                # print the rows in result set or list one at a time;
                if 0 == ind:
                   # print the column headers once;
                   # print the frame's top line;
                   frame_line = print_frame_line(len(r[0]), column_width)
                   rows_to_stdout([{k:k for k,v in r[0].items()}], no_color = True)
                   print(frame_line)
                rows_to_stdout(r)
             # print the frame's bottom line;
             print(frame_line)
       except dmException as dme:
          show(LOG_LEVEL.error, dme)
    
    def select_to_stdout(session, dql_stmt, format = "table", column_width = 20, mode = "wrap", frame = True, fg_color = "BLACK", bg_color = "white", alt_period = 5, col_mode = 2, numbering = False):
       """
       execute in session session the DQL SELECT statement passed in dql_stmt and sends the properly formatted result to stdout;
       if format == "json", json.dumps() is invoked for each document;
       if format == "table", document is output in tabular format;
       """
       result = co_select(session, dql_stmt, numbering)
       result_to_stdout(result, format, column_width, mode, frame, fg_color, bg_color, alt_period, col_mode, numbering)
    
    def paginate(cursor, initial_page_size, max_cache_size = MAX_CACHE_SIZE):
       """
       Takes the generator cursor and returns a closure handle that allows to move forwards and backwards in the result set it is bound to;
       a closure is used here so a context is preserved between calls (an alternate implementation could use a co-routine or a class);
       returns None if the result set is empty;
       rows are returned as an array of dictionaries;
       the handle takes a signed number of rows: if the page size (in rows) is negative, the cursor goes back that many rows, otherwise it moves forwards;
       pages can be resized by passing a new page_size to the handle;
       use a page size of 0 to close the cursor;
       Usage:
              cursor = co_select(session, dql_stmt)
              handle = paginate(cursor, max_cache_size = 1000)
              # paginate forwards 50 rows:
              handle(50)
              # paginate backwards 50 rows:
              handle(-50)
              # change page_size to 50 rows while moving forward 20 rows;
              handle(20, 50)
              # close cursor;
              handle(0)
              cursor.send(0)
       the rows from the result set that have been fetched so far are kept in cache so that they can be returned when paginating back;
       the cache is automatically extended when paginating forwards; it is never emptied so it can be heavy on memory if the result set is very large and the forwards pagination goes very far into it;
       the cache has a settable max_cache_size limit with default MAX_CACHE_SIZE;
       """
       cache = []
       # current cache's size in rows;
       cache_size = 0
    
       # initialize current_page_size, it can change later;
       current_page_size = initial_page_size
    
       # index in cached result_set of first and last rows in page;
       current_top = current_bottom = -1
    
       # start the generator;
       # one row will be in the cache before even starting paginating and this is taken into account later;
       cache = next(cursor)
       if cache:
          current_top = current_bottom = 0
          cache_size = 1
       else:
          return None
    
       def move_window(increment, page_size = None):
          nonlocal cache, cache_size, current_top, current_bottom, current_page_size
          if page_size is None:
             # work-around the default parameter value being fixed at definition time...
             page_size = current_page_size
          # save the new page size in case it has changed;
          current_page_size = page_size
          if increment > 0:
             # forwards pagination;
             if current_bottom + increment + 1 > cache_size:
                # "page fault": must fetch the missing rows to complete the requested page size;
                if current_bottom + increment > max_cache_size:
                   # the cache size limit has been reached;
                    # note that the above formula does not always reflect reality, i.e. if fewer rows are returned than asked for because the result set's end has been reached;
                   # in such cases, page_size will be adjusted to fit max_cache_size;
                   show(LOG_LEVEL.info, f"in cache_logic, maximum allowed cache size of {max_cache_size} reached")
                   increment = max_cache_size - current_bottom
                delta = increment if cache_size > 1 else increment - 1 # because of the starting one row in cache;
                cache += cursor.send(delta)
                cache_size += delta # len(cache)
                current_bottom += delta
             else:
                current_bottom += increment
             current_top = max(0, current_bottom - page_size + 1)
             return cache[current_top : current_bottom + 1]
          elif increment < 0:
             # backwards pagination;
             increment = abs(increment)
             current_top = max(0, current_top - increment)
             current_bottom = min(cache_size, current_top + page_size) - 1
             return cache[current_top : current_bottom + 1]
          else:
             # increment is 0: close the generator;
             # must trap the strange exception after the send();
             try:
                cursor.send(0)
             except:
                pass
             return None
       return move_window
    
    def paginate_to_stdout(session, dql_stmt, page_size = 20, format = "table", column_width = 20, mode = "wrap", frame = True, fg_color = "BLACK", bg_color = "white", alt_period = 5, col_mode = 2, numbering = False):
       """
          execute the dql statement dql_stmt in session session and output the result set in json or table format; if a tabular format is chosen, page_size is the maximum number of rows displayed at once;
          returns a handle to request the next pages or navigate backwards;
          example of usage:
                  h = paginate_to_stdout(s, "select r_object_id, object_name, r_version_label from dm_document")
                  if h:
                     # start the generator;
                     next(h)
                     # navigate the result set;
                     # paginate forwards 10 rows;
                     h.send(10)
                     # paginate forwards 20 rows;
                     h.send(20)
                     # paginate backwards 15 rows;
                     h.send(-15)
                     # close the handle; 
                     h.send(0)
    
       """
       try:
          q = co_select(session, dql_stmt, numbering)
          if not q:
             return None
          handle = paginate(q, page_size)
          while True:
             values = yield handle
             nb_rows = values[0] if isinstance(values, tuple) else values
             new_page_size = values[1] if isinstance(values, tuple) and len(values) > 1 else None
             if new_page_size:
                page_size = new_page_size
             if nb_rows is None:
                # default value is 1;
                nb_rows = 1
             if 0 == nb_rows:
                # exit request;
                break
             result_to_stdout([handle(nb_rows, page_size)], format, column_width, mode, frame, fg_color, bg_color, alt_period, col_mode, numbering)
       except Exception as e:
          show(LOG_LEVEL.error, e)
    
    def describe(session, dm_type, is_type = True, format = "table", column_width = 20, mode = "wrap", frame = True, fg_color = "WHITE", bg_color = "BLACK", alt_period = 5, col_mode = 2):
       """
       describe dm_type as a type if is_type is True, as a registered table otherwise;
       optionally displays the output into a table or json if format is not None;
       returns the output of api's describe verb or None if an error occurred;
       """
       show(LOG_LEVEL.debug, f"in describe(), dm_type={dm_type}")
       try:
          dump_str = dmAPIGet(f"describe,{session},{'type' if is_type else 'table'},{dm_type}")
          if dump_str is None:
             raise dmException(origin = "describe()", message = f"bad parameter {dm_type}")
          s = [{"attribute": l[0], "type": l[1]} for l in [i.split() for i in dump_str.split("\n")[5:-1]]]
          if format:
             result_to_stdout([s], format, column_width, mode, frame, fg_color, bg_color, alt_period, col_mode)
       except dmException as dme:
          show(LOG_LEVEL.error, dme)
          show(LOG_LEVEL.error, dmAPIGet(f"getmessage,{session}").rstrip())
       finally:
          show(LOG_LEVEL.debug, "exiting describe()")
          return dump_str
    
    def disconnect(session):
       """
       closes the given session;
       returns True if no error, False otherwise;
       """
       show(LOG_LEVEL.debug, "in disconnect()")
       try:
          status = dmAPIExec("disconnect," + session)
       except Exception as e:
          show(LOG_LEVEL.error, "exception in disconnect():")
          show(LOG_LEVEL.error, e)
          if LOG_LEVEL.log_level > LOG_LEVEL.error: traceback.print_stack()
          status = False
       finally:
          show(LOG_LEVEL.debug, "exiting disconnect()")
          return status
    
    # call module initialization;
    dmInit()
    

    Some comments

     A few comments are in order. I'll skip the ctypes part because it was already presented in the original blog.
    On line 39, class LOG_LEVEL is being defined to encapsulate the verbosity levels, and the current one, of the error messages. Levels are inclusive of lesser ones; set LOG_LEVEL.log_level to LOG_LEVEL.no_log to turn off error messages. Default verbosity level is error, which means that only error messages are output, not debugging messages such as on function entry and exit.
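     A minimal sketch of that inclusive verbosity scheme is shown below; the class and function names mirror the article's, but this toy implementation is assumed for illustration and returns a boolean so the effect is observable:

```python
class LOG_LEVEL:
    """verbosity levels; levels are inclusive of lesser ones;"""
    no_log, error, warning, info, debug = 0, 1, 2, 3, 4
    # default verbosity: only error messages are output;
    log_level = error

def show(level, message):
    """print message only if its level is within the current verbosity;
    returns True if the message was printed, False otherwise;"""
    if LOG_LEVEL.no_log != LOG_LEVEL.log_level and level <= LOG_LEVEL.log_level:
        print(message)
        return True
    return False

show(LOG_LEVEL.error, "this is shown")        # error <= error: printed
show(LOG_LEVEL.debug, "this is suppressed")   # debug > error: suppressed
```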
     On line 55, class dmException defines the custom exception used to raise Documentum errors, e.g. on lines 189 and 190. The linked-in C library libdmcl.so does not raise exceptions; its calls just return a TRUE or FALSE status (a non-zero or zero value). The interface remaps those values to True or False, or sometimes None. The default exception Exception is still handled (e.g. on lines 83 and 92), more for reasons of uniformity than out of real necessity, although it cannot be totally excluded that ctypes raises some exception of its own under some circumstances. else and finally clauses are frequently used to remap the status or result value, return it, and clean up. Line 64 defines how the custom exception is printed: it simply prints its instantiation parameters.
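     To make this concrete, here is a minimal sketch of such an exception class. The keyword parameters (origin, message) match the calls visible in the listing, but this particular __str__() implementation is an assumption for illustration, not the module's actual code:

```python
class dmException(Exception):
    """minimal sketch of the custom Documentum exception described above;"""
    def __init__(self, origin = "", message = ""):
        self.origin, self.message = origin, message
    def __str__(self):
        # printing the exception simply echoes its instantiation parameters;
        return f"in {self.origin}: {self.message}"

# typical usage, mirroring the listing's raise statements;
try:
    raise dmException(origin = "describe()", message = "bad parameter")
except dmException as dme:
    print(dme)
```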
     On line 235, function co_select() is defined. This is really the main function of the whole interface. Its purpose is to execute a SELECT DQL statement and return the rows on demand, rather than in one potentially large in-memory list of dictionaries (reminder: python's lists are equivalent to arrays, and its dictionaries to records, hashes, or associative arrays in other languages). On line 316, the yield statement makes this possible; it is this statement that turns a traditional, unsuspecting function into a generator or coroutine (this distinction is really python stuff, conceptually the function is a coroutine). Here, yield works both ways: it returns a row, which makes the function a generator, but can also optionally accept a number of rows to return at once, and 0 to stop the generator, which makes it a coroutine. On line 341, the exception handler's finally clause closes the collection and, on line 348, returns the residual rows that were fetched but not returned yet because the end of the collection was reached and the yield statement was not executed.
    One of the biggest pros of generators, in addition to saving memory, is to separate the navigation into the result set from the processing of the received data. Low-level, dirty technical details are therefore segregated into their own function out of the way of high-level data processing, resulting in a clearer and less distracting code.
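     The two-way yield protocol described above can be illustrated with a toy generator over plain integers instead of a collection (hypothetical code, not part of the module; leftover rows at exhaustion are simply dropped in this toy version):

```python
def co_numbers(limit):
    """toy two-way generator mimicking co_select()'s protocol: each yield
    returns a batch of rows and receives the next batch size; 0 stops it."""
    batch, n, size = [], 1, 1
    while n <= limit:
        batch.append(n)
        n += 1
        if len(batch) >= size:
            size = yield batch
            batch = []
            if 0 == size:
                return
            if size is None:
                # a plain next() sends None: default to batches of 1;
                size = 1

cur = co_numbers(10)
print(next(cur))      # start the generator: first batch of one row, [1]
print(cur.send(3))    # ask for the next 3 rows at once, [2, 3, 4]
```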
    Note the function’s numbering parameter: when True, returned rows are numbered starting at 1. It looks like this feature was not really necessary because a SELECT statement could just include a (pseudo-)column such as ROWNUM (for Oracle RDBMS) or a sequence, that would be treated as any other column but things are not so easy. Interfacing a sequence to a registered table, and resetting it before usage, is possible but quite complicated and needs to be done at the database level, which causes it to be not portable; besides, gaps in the sequence were observed, even when nocache was specified.
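     Numbering rows client-side is both trivial and portable, which is what the numbering parameter does; a sketch of the idea (the row_number key name is made up for illustration):

```python
# number rows starting at 1 on the client side, independent of the RDBMS;
rows = [{"object_name": "a"}, {"object_name": "b"}]
# dict(row, row_number = i) copies each row and adds the counter column;
numbered = [dict(row, row_number = i) for i, row in enumerate(rows, start = 1)]
print(numbered)
```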
     On line 352, the function select_to_dict() is defined for those cases where it still makes sense to hold a whole result set in memory at once. It does almost nothing, as the bulk of the work is done by co_select(). Line 359 executes a list comprehension that takes the generator returned by co_select() and forces it to be iterated until it meets its stop condition.
    Skipping to line 519, function select_to_stdout() is another application of co_select(). This time, the received generator is passed to function result_to_stdout() defined on line 361; this function exemplifies outputting the data in a useful manner: it displays them to stdout either in json through the imported json library, or in tabular format. It can be used elsewhere each time such a presentation is sensible, e.g. from function describe() below, just make sure that the data are passed as a singleton list of a list of dictionaries (i.e. a list whose sole element is a list of dictionaries).
    There isn’t much to add about the well-known json format (see an example below) but the tabular presentation is quite rich in functionalities. It implements in python what was presented here and here with the addition of color; the driving goal was to get a readable and comfortable table output containing documents as rows and their attributes as columns. Interactivity can be achieved by piping the output of the function into the less utility, as illustrated below:

    $ pip install termcolor
    $ export PYTHONPATH=/home/dmadmin/dctm-DctmAPI:/home/dmadmin/.local/lib/python3.5/site-packages
     $ cat - > test-table.py << eot
    #!/usr/bin/python3.6
    import DctmAPI
    
    s = DctmAPI.connect("dmtest73", "dmadmin", "dmadmin")
    DctmAPI.select_to_stdout(s, "select r_object_id, object_name, title, owner_name, subject, r_version_label from dm_document", format = "table", column_width = 30, mode = "wrap", frame = True, fg_color = "YELLOW", bg_color = "BLACK", alt_period = 5, col_mode = 2)
    eot
    $ chmod +x test-table.py
    $ ./test-table.py | less -R
    

    Result:

     Here, a tabular (format = "table"; use format = "json" for json output) representation of the data returned by the DQL statement has been requested with 30-character-wide columns (column_width = 30); attributes too large to fit in their column are wrapped around; they could have been truncated by setting mode = "truncate". A frame à la mysql or postgresql has been requested with frame = True. Row colorization has been requested with the first line of every 5 rows (alt_period = 5) in reverse color yellow on black and the others in black on yellow (col_mode = 2; use col_mode = 1 for alt_period-line-wide alternating fg/bg and bg/fg blocks, and col_mode = 0 for no colorization).
    The simple but very effective termcolor ANSI library is used here, which is a real relief compared to having to reimplement one myself for the 2nd or 3rd time in my life…
    Note the use of the less command with the -R option so ANSI color escape sequences are passed through to the terminal and correctly rendered.
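     termcolor itself is just a thin wrapper around the standard ANSI SGR escape sequences; a rough, stdlib-only approximation of the colored() call used above could look like this (simplified sketch, not termcolor's actual implementation):

```python
# standard ANSI SGR foreground codes; background codes are these + 10;
ANSI_FG = {"grey": 30, "red": 31, "green": 32, "yellow": 33,
           "blue": 34, "magenta": 35, "cyan": 36, "white": 37}
ANSI_ATTR = {"bold": 1, "dark": 2, "reverse": 7}

def colored(text, fg = None, bg = None, attrs = ()):
    # build the SGR code list: attributes first, then foreground, then background;
    codes = [str(ANSI_ATTR[a]) for a in attrs]
    if fg:
        codes.append(str(ANSI_FG[fg]))
    if bg:
        # termcolor prefixes background colors with "on_": strip it here;
        codes.append(str(ANSI_FG[bg.replace("on_", "")] + 10))
    if not codes:
        return text
    return "\033[" + ";".join(codes) + "m" + text + "\033[0m"

print(colored("warning", "yellow", "on_grey", ["bold"]))
```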
    As a by-product, let’s generalize the snippet above into an independent, reusable utility:

    $ cat test-table.py
    #!/usr/bin/python3.6 
    import argparse
    import DctmAPI
                    
    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('-d', '--docbase', action='store',
                            default='dmtest73', type=str,
                            nargs='?',
                            help='repository name [default: dmtest73]')
        parser.add_argument('-u', '--user_name', action='store',
                            default='dmadmin',
                            nargs='?',
                            help='user name [default: dmadmin]')
        parser.add_argument('-p', '--password', action='store',
                            default='dmadmin',
                            nargs='?',
                            help='user password [default: "dmadmin"]')
        parser.add_argument('-q', '--dql_stmt', action='store',
                            nargs='?',
                            help='DQL SELECT statement')
        args = parser.parse_args()
                
        session = DctmAPI.connect(args.docbase, args.user_name, args.password)
        if session is None:
           print(f"no session opened in docbase {args.docbase} as user {args.user_name}, exiting ...")
           exit(1)
    
        DctmAPI.select_to_stdout(session, args.dql_stmt, format = "table", column_width = 30, mode = "wrap", frame = True, fg_color = "YELLOW", bg_color = "BLACK", alt_period = 5, col_mode = 2)
    
    # make it self-executable;
    $ chmod +x test-table.py
    
    # test it !
    $ ./test-table.py -q "select r_object_id, object_name, title, owner_name, subject, r_version_label from dm_document" | less -R
    
    # ship it !
    # nah, kidding.
    

    For completeness, here is an example of a json output:

    s = DctmAPI.connect("dmtest73", "dmadmin", "dmadmin")
    DctmAPI.select_to_stdout(s, "select r_object_id, object_name, r_version_label from dm_document", format = "json")
    [
       {
          "r_object_id": "0900c350800001d0",
          "object_name": "Default Signature Page Template",
          "r_version_label": [
             "CURRENT",
             "1.0"
          ]
       }
    ]
    ...
    [
       {
          "r_object_id": "0900c350800001da",
          "object_name": "Blank PowerPoint Pre-3.0 Presentation",
          "r_version_label": [
             "CURRENT",
             "1.0"
          ]
       }
    ]
    

     Note the embedded list for the repeating attribute r_version_label; unlike relational tables, the json format suits documents from repositories perfectly well. It does not fully capture Documentum's object-relational model but it is close enough. Maybe one day, once hell has frozen over ;-), we'll see a NOSQL implementation of Documentum, but I digress.
     Back to the code, on line 528 function paginate() is defined. This function makes it possible to navigate a result set forwards and backwards in a table; the latter is possible by caching (more exactly, saving, as the data are cumulative and never replaced) the rows received so far. As parameters, it takes a cursor for the opened collection, a page size and the maximum cache size. In order to preserve its context, e.g. the cache and the pointers to the first and last rows displayed from the result set, the function's chosen implementation is that of a closure, with the inner function move_window() returned to the caller as a handle. Alternative implementations could be a class or, again, a co-routine. move_window() requests the rows from the cursor via send(nb_rows) as previously explained and returns them as a list. A negative nb_rows means navigating backwards, i.e. the requested rows are returned from the cache instead of the cursor. Obviously, as the cache is dynamically extended up to the specified size and its content never released to make room for new rows, if one paginates to the bottom of a very large result set, a lot of memory can still be consumed because the whole result set ends up in memory. A more conservative implementation could get rid of older rows to accommodate the new ones but at the cost of a reduced history depth, so it's a trade-off; anyway, this subject is out of scope.
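     Stripped of the Documentum specifics, the closure idea fits in a few lines; here is a toy version over a plain in-memory list, for illustration only:

```python
def paginate_list(rows, page_size):
    """toy closure-based pagination over an in-memory list; the returned
    move(n) shifts the window by n rows (negative = backwards) and
    returns the current page; the context (top) survives between calls."""
    top = 0
    def move(increment):
        nonlocal top
        # clamp the window's top inside the list;
        top = max(0, min(len(rows) - 1, top + increment))
        return rows[top : top + page_size]
    return move

page = paginate_list(list(range(100)), 5)
print(page(0))     # first page: [0, 1, 2, 3, 4]
print(page(5))     # one page forwards: [5, 6, 7, 8, 9]
print(page(-3))    # three rows backwards: [2, 3, 4, 5, 6]
```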
     As its usage protocol may not be that simple at first, an example function paginate_to_stdout() is defined as a co-routine starting on line 613, with the same parameters as in select_to_stdout(). It can be used as follows:

    # connect to the repository;
    s = DctmAPI.connect("dmtest73", "dmadmin", "dmadmin")
    
    # demonstration of DctmAPI.paginate_to_stdout();
    # request a pagination handle to the result set returned for the SELECT dql query below;
    h = DctmAPI.paginate_to_stdout(s, "select r_object_id, object_name, title, owner_name, subject, r_version_label from dm_document", page_size = 5, format = "table", column_width = 30, mode = "wrap", frame = True, fg_color = "RED", bg_color = "black", alt_period = 5, col_mode = 1, numbering = True)  
    
    print("starting the generator")
    next(h)
    
    nb_rows = 3
    print(f"\nnext {nb_rows} rows")
    h.send(nb_rows)
    
    nb_rows = 10
    print(f"\nnext {nb_rows} rows")
    h.send(nb_rows)
    
    nb_rows = 5
    print(f"\nnext {nb_rows} rows and page_size incremented to 10")
    h.send((nb_rows, 10))
    
    nb_rows = 10
     print(f"\nnext {nb_rows} rows")
    h.send(nb_rows)
    
    nb_rows = -4 
     print(f"\nprevious {abs(nb_rows)} rows")
    h.send(nb_rows)
    
    nb_rows = 12 
    print(f"\nnext {nb_rows} rows and page_size decremented to 6")
    h.send((nb_rows, 6))
    
    nb_rows = -10 
     print(f"\nprevious {abs(nb_rows)} rows")
    h.send(nb_rows)
    
    print(f"exiting ...")
    try:
       h.send(0)
    except:
       # trap the StopIteration exception;
       pass
    sys.exit()
    

    Here, each call to send() results in a table being displayed with the requested rows, as illustrated below:


     Note how send() takes either a scalar or a tuple as parameter; when the page size needs to be changed, a tuple including the new page size is passed to the closure, which extracts its values (lines 640 and 641). It is a bit convoluted but it is a limitation of send(): as it takes only one parameter, multiple values must be packed into a collection.
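     The tuple-unpacking convention can be isolated in a toy coroutine (hypothetical code mirroring the send() protocol, not the module's):

```python
def counter():
    """toy coroutine mirroring paginate_to_stdout()'s send() convention:
    send(n) advances by n steps; send((n, step)) also changes the step size."""
    total, step = 0, 1
    while True:
        values = yield total
        # same unpacking as on lines 640-641: scalar or (scalar, new_step) tuple;
        nb = values[0] if isinstance(values, tuple) else values
        if isinstance(values, tuple) and len(values) > 1:
            step = values[1]
        total += nb * step

c = counter()
print(next(c))          # prime the coroutine: 0
print(c.send(3))        # advance 3 x step 1: 3
print(c.send((2, 10)))  # change step to 10, advance 2 x 10: 23
```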
     The snippet above could be generalized into a stand-alone interactive program that reads from the keyboard a number of rows as an offset to move backwards or forwards, if saving the whole result set into a disk file is too expensive and only a few pages are needed; but DQL has the limiting clause enable(return_top N) for this purpose, so such a utility is not really useful.
     On line 654, the describe() function returns as-is the result of the eponymous api verb, i.e. as a raw string with each item delimited by an end-of-line character ('\n' under Linux) for further processing by the caller; optionally, it can also output it as a table or as a json literal by taking advantage of the function result_to_stdout() and passing it the data appropriately formatted on line 667 as a list of one list of dictionaries.
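     That formatting on line 667 boils down to a list comprehension over the dump's lines; applied here to a synthetic dump assumed to be shaped like the describe output (5 header lines, one "name type" pair per line, a trailing newline; the sample content is made up):

```python
# a synthetic dump assumed to mimic the api's describe verb output;
dump_str = ("header line 1\nheader line 2\nheader line 3\n"
            "header line 4\nheader line 5\n"
            "object_name string(255)\n"
            "r_creation_date time\n")
# same comprehension as in describe(): [5:-1] skips the 5 header lines and
# the empty element produced by the trailing newline;
s = [{"attribute": l[0], "type": l[1]}
     for l in [i.split() for i in dump_str.split("\n")[5:-1]]]
print(s)
```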
    Here are two examples of outputs.

    s = DctmAPI.connect("dmtest73", "dmadmin", "dmadmin")
    desc_str = DctmAPI.describe(s, "dm_document", format = "json")
    # json format:
    

    s = DctmAPI.connect("dmtest73", "dmadmin", "dmadmin")
    desc_str = DctmAPI.describe(s, "dm_document")
    # Tabular format:
    


    Finally, on line 693, the module is automatically initialized at load time.

    Conclusion

     The python language has quite evolved from v2 to v3, the latest as of this writing being 3.9. Each version brings a few small, visible enhancements, one example being the formatting f'strings (no pun intended), which were used here. Unfortunately, they need python 3.6 minimum, which breaks compatibility with previous releases; fortunately, they can be easily replaced with older syntax alternatives if need be.
    As usual, the DctmAPI does not pretend to be the best python interface to Documentum ever. It has been summarily tested and bugs could still be lurking around. I know, there are lots of improvements and functionalities possible, e.g. displaying acls and users and groups, maybe wrapping the module into classes, using more pythonic constructs, to name but a few. So, feel free to add your comments, corrections and suggestions below. They will all be taken into consideration and maybe implemented too if interesting enough. In the meantime, take care of yourself and your family. Happy New Year to everyone !

    Cet article DctmAPI.py revisited est apparu en premier sur Blog dbi services.

    dctmping, A Documentum Repository Checker Utility


     When working with containers, the usual trend is to make them as compact as possible by removing any file or executable that is not used by what is deployed in them. Typically, interactive commands are good candidates not to be installed. Sometimes, this trend is so extreme that even simple utilities such as ping or even the less pager are missing from the containers. This is generally fine once a container reaches the production stage but not so much while what's deployed in it is still in development and has not reached its full maturity yet. Without such essential utilities, one might as well be blind and deaf.
    Recently, I wanted to check if a containerized D2 installation could reach its target repository but could not find any of the usual command-line tools that are included with the content server, such as iapi or dctmbroker. Bummer, I thought, but less politely. Admittedly, those tools belong to the content server binaries and not to the WDK clients such as D2 and DA, so that makes sense. Maybe, but that does not solve my problem.
     In article A Small Footprint Docker Container with Documentum command-line Tools, I showed how to have a minimalist installation of the iapi, idql and dmdocbroker command-line tools that could be containerized, but for my current need, I don't need so much power. A bare repository ping, a dctmping if you will, would be enough. Clearly, as D2 is a WDK application, and WDK applications are DFC applications, a simple DFC-based java utility would be just what the doctor ordered. There are probably hundreds of variants of such a basic utility floating on the Web but here is my take on it anyway.

    The code

    The following listing generates the dctmping.java program.

     $ cat - > dctmping.java << eot
    // a healthcheck program to check whether all the repositories whose docbroker's hosts are defined in dfc.properties are reachable;
    // cec at dbi-services.com, June 201;
     // to compile: export CLASSPATH=/path/to/the/dfc.jar; javac dctmping.java
     // to execute: java -classpath .:/path/to/dfc/config:\$CLASSPATH dctmping [target_docbase user_name password]
     // if target_docbase is given, the program will attempt to connect to it using the command-line parameters user_name and password and run a SELECT query;
     
    import com.documentum.fc.client.IDfClient;
    import com.documentum.fc.client.DfClient;
    import com.documentum.fc.client.DfQuery;
    import com.documentum.fc.client.IDfCollection;
    import com.documentum.fc.client.IDfDocbaseMap;
    import com.documentum.fc.client.IDfQuery;
    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSessionManager;
    import com.documentum.fc.common.DfLoginInfo;
    import com.documentum.fc.common.IDfLoginInfo;
    import com.documentum.fc.client.IDfTypedObject;
    import java.io.RandomAccessFile;
     
    public class dctmping {
       IDfSessionManager sessMgr = null;
       IDfSession idfSession;
     
       public void showDFC_properties_file() throws Exception {
          System.out.println("in showDFC_properties_file");
          IDfClient client = DfClient.getLocalClient();
          IDfTypedObject apiConfig = client.getClientConfig();
          String[] values = apiConfig.getString("dfc.config.file").split(":");
          System.out.printf("dfc.properties file found in %s\n\n", values[1]);
          RandomAccessFile f_in = new RandomAccessFile(values[1], "r");
          String s;
          while ((s = f_in.readLine()) != null) {
             System.out.println(s);
          }
          f_in.close();
       }
     
       public void getAllDocbases() throws Exception {
          System.out.printf("%nin getAllDocbases%n");
          IDfClient client = DfClient.getLocalClient();
          IDfDocbaseMap docbaseMap = client.getDocbaseMap();
          for (int i = 0; i < docbaseMap.getDocbaseCount(); i++) {
              System.out.println("Docbase Name : " + docbaseMap.getDocbaseName(i));
              System.out.println("Docbase Desc : " + docbaseMap.getDocbaseDescription(i));
          }
       }
     
       public void getDfSession(String repo, String user, String passwd) throws Exception {
          System.out.printf("%nin getDfSession%n");
          IDfLoginInfo login = new DfLoginInfo();
          login.setUser(user);
          login.setPassword(passwd);
          IDfClient client = DfClient.getLocalClient();
          sessMgr = client.newSessionManager();
          sessMgr.setIdentity(repo, login);
          idfSession = sessMgr.getSession(repo);
          if (idfSession != null)
              System.out.printf("Session created successfully in repository %s as user %s\n", repo, user);
          else
             throw new Exception();
       }
     
       public void releaseSession() throws Exception {
          sessMgr.release(idfSession);
       }
     
       public void API_select(String repo, String dql) throws Exception {
          System.out.printf("%nin API_select%s%n", repo);
          System.out.printf("SELECT-ing in repository %s\n", repo);
          System.out.printf("Query is: %s\n", dql);
          IDfQuery query = new DfQuery();
          query.setDQL(dql);
          IDfCollection collection = null;
          String r_object_id = null;
          String object_name = null;
          final int max_listed = 20;
          int count = 0;
          System.out.printf("%-16s  %s\n", "r_object_id", "object_name");
          System.out.printf("%-16s  %s\n", "----------------", "-----------");
          try {
             collection = query.execute(idfSession, IDfQuery.DF_READ_QUERY);
             while (collection.next()) {
                count++;
                if (max_listed == count) {
                   System.out.printf("... max %d reached, skipping until end of result set ...\n", max_listed);
                   continue;
                }
                else if (count > max_listed)
                   continue;
                r_object_id = collection.getString("r_object_id");
                object_name = collection.getString("object_name");
                System.out.printf("%-16s  %s\n", r_object_id, object_name);
             }
             System.out.printf("%d documents found\n", count);
          } finally {
             if (collection != null) {
                collection.close();
             }
          }
       }
      
       public static void main(String[] args) throws Exception {
          dctmping dmtest = new dctmping();
          dmtest.showDFC_properties_file();
          dmtest.getAllDocbases();
          String docbase;
          String user;
          String passwd;
          if (0 == args.length || args.length > 3) {
             System.out.println("\nUsage: dctmping [target_docbase [user_name [password]]]");
             System.exit(1);
          }
          if (1 == args.length) {
             docbase = args[0];
             user    = "dmadmin";
             passwd  = "trusted:no_password_needed";
          } 
          else if (2 == args.length) {
             docbase = args[0];
             user    = args[1];
             passwd  = "trusted:no_password_needed";
          } 
          else {
             docbase = args[0];
             user    = args[1];
             passwd  = args[2];
          } 
    
          String[] queries = {"SELECT r_object_id, object_name from dm_document where folder('/System/Sysadmin/Reports', descend);",
                              "SELECT r_object_id, object_name from dm_cabinet;"};
          for (String dql_stmt: queries) {
             try {
                dmtest.getDfSession(docbase, user, passwd);
                dmtest.API_select(docbase, dql_stmt);
             }
             catch(Exception exception) {
                System.out.printf("Error while attempting to run DQL query in repository %s as user %s%n", docbase, user);
                exception.printStackTrace();
             }
             finally {
                try {
                   dmtest.releaseSession();
                }
                catch(Exception exception) {}
             }
          }
       }
    }
    eoj
    

    Compile it as instructed in the header, e.g.:

    export DOCUMENTUM=/home/dmadmin/documentum
    export CLASSPATH=$DOCUMENTUM/shared/dfc/dfc.jar
    javac dctmping.java
    

    Use the command below to execute it:

    java -classpath .:$DOCUMENTUM/shared/config:$CLASSPATH dctmping [target_docbase [user_name [password]]]
    

    When no parameter is given on the command-line, dctmping attempts to locate the dfc.properties file and displays its content if it succeeds. This proves that the file is accessible through one of the paths in $CLASSPATH.
    It then displays the help message:

    $ java -classpath .:$DOCUMENTUM/shared/config:$CLASSPATH dctmping
    in showDFC_properties_file
    dfc.properties file found in /home/dmadmin/documentum/shared/config/dfc.properties
    
    dfc.data.dir=/home/dmadmin/documentum/shared
    dfc.tokenstorage.dir=/home/dmadmin/documentum/shared/apptoken
    dfc.tokenstorage.enable=false
    
    dfc.docbroker.host[0]=dmtest.cec
    dfc.docbroker.port[0]=1489
    
    in getAllDocbases
    Docbase Name : dmtest73
    Docbase Desc : a v7.3 test repository
    
    Usage: dctmping [target_docbase [user_name [password]]]
    

    Starting on line 12, a list of all the reachable docbases is given, here only one, dmtest73. In some installations, this list may be more populated, e.g.:

    Docbase Name : global_repository
    Docbase Desc : Global Repository
    Docbase Name : SERAC
    Docbase Desc : SERAC CTLQ Repository
    Docbase Name : CARAC
    Docbase Desc : CARAC CTLQ Repository
    

    At least one parameter is needed: the repository name. The user defaults to “dmadmin”. A password must be given when connecting remotely, i.e. from a machine other than the content server's. When connecting locally, it can be anything, as the user is trusted, or it can even be left empty.

    A Real Case Application

    To check a remote repository’s access, all 3 parameters are required, e.g.:

    $ java -classpath .:$CLASSPATH dctmping dmtest73 dmadmin my_password
    in showDFC_properties_file
    dfc.properties file found in /home/dmadmin/documentum/shared/config/dfc.properties
    
    dfc.data.dir=/home/dmadmin/documentum/shared
    dfc.tokenstorage.dir=/home/dmadmin/documentum/shared/apptoken
    dfc.tokenstorage.enable=false
    
    dfc.docbroker.host[0]=dmtest.cec
    dfc.docbroker.port[0]=1489
    
    in getAllDocbases
    Docbase Name : dmtest73
    Docbase Desc : a v7.3 test repository
    
    in getDfSession
    Session created successfully in repository dmtest73 as user dmadmin
    
    in API_selectdmtest73
    SELECT-ing in repository dmtest73
    Query is: SELECT r_object_id, object_name from dm_document where folder('/System/Sysadmin/Reports', descend);
    r_object_id       object_name
    ----------------  -----------
    0900c35080002a1b  StateOfDocbase
    0900c350800029d7  UpdateStats
    0900c35080002a11  ContentWarning
    0900c35080002a18  DBWarning
    0900c3508000211e  ConsistencyChecker
    5 documents found
    
    in getDfSession
    Session created successfully in repository dmtest73 as user dmadmin
    
    in API_selectdmtest73
    SELECT-ing in repository dmtest73
    Query is: SELECT r_object_id, object_name from dm_cabinet;
    r_object_id       object_name
    ----------------  -----------
    0c00c35080000107  Temp
    0c00c3508000012f  Templates
    0c00c3508000057b  dm_bof_registry
    0c00c350800001ba  Integration
    0c00c35080000105  dmadmin
    0c00c35080000104  dmtest73
    0c00c35080000106  System
    0c00c35080000130  Resources
    8 documents found
    

    After displaying the content of the dfc.properties file, dctmping attempts to connect to the given repository with the given credentials and runs both queries below:

    SELECT r_object_id, object_name from dm_document where folder('/System/Sysadmin/Reports', descend);
    SELECT r_object_id, object_name from dm_cabinet;
    

    As we don’t need a full listing for testing the connectivity, especially when it may be very large and take several minutes to complete, a maximum of 20 rows by query is returned.
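The capping logic itself is independent of the DFCs and can be sketched with a plain list standing in for the IDfCollection (an illustration of the technique, not the utility's exact code):

```java
import java.util.Arrays;
import java.util.List;

public class CappedListing {
    // Count every row of the result set but print at most maxListed of them,
    // mirroring the continue-based loop in API_select();
    static int listCapped(List<String> rows, int maxListed) {
        int count = 0;
        for (String row : rows) {
            count++;
            if (count == maxListed) {
                System.out.printf("... max %d reached, skipping until end of result set ...%n", maxListed);
                continue;
            } else if (count > maxListed)
                continue;
            System.out.println(row);
        }
        // the total is still accurate even though the output was capped;
        return count;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("Temp", "Templates", "System");
        System.out.printf("%d documents found%n", listCapped(rows, 2));
    }
}
```

The point of the continue statements is that the collection is still fully consumed, so the final count reflects the whole result set, only its display is truncated.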

    Checking the global_registry

    If a global_registry repository has been defined in the dfc.properties file, the credentials to access it are also listed in that file as illustrated below:

    ...
    dfc.globalregistry.repository=my_global_registry
    dfc.globalregistry.username=dm_bof_registry
    dfc.globalregistry.password=AFAIKL8C2Y/2gRyQUV1R7pmP7hfBDpafeWPST9KKlQRtZVJ4Ya0MhLsEZKmWr1ok9+oThA==
    ...
    

    Even though the password is encrypted, it can be used verbatim to connect to the repository, as shown below:

    $ java -classpath .:$DOCUMENTUM/shared/config:$CLASSPATH dctmping my_global_registry dm_bof_registry 'AFAIKL8C2Y/2gRyQUV1R7pmP7hfBDpafeWPST9KKlQRtZVJ4Ya0MhLsEZKmWr1ok9+oThA=='
    ...
    in getDfSession
    Session created successfully in repository my_global_registry as user dm_bof_registry
     
    in API_selectmy_global_registry
    SELECT-ing in repository my_global_registry
    Query is: SELECT r_object_id, object_name from dm_cabinet;
    r_object_id       object_name
    ----------------  -----------
    0c0f476f80000106  System
    0c0f476f800005b4  dm_bof_registry
    2 documents found
    ...
    
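Automating that check would simply mean reading the three dfc.globalregistry.* entries back from the dfc.properties file and reusing them as credentials; here is a minimal sketch with java.util.Properties (the helper name is mine, the property names are the ones shown above):

```java
import java.io.StringReader;
import java.util.Properties;

public class GlobalRegistryCheck {
    // Extract repository/username/password of the global registry from the
    // dfc.properties content; the encrypted password is usable verbatim;
    static String[] registryCredentials(String dfcProperties) throws Exception {
        Properties p = new Properties();
        p.load(new StringReader(dfcProperties));
        return new String[] {
            p.getProperty("dfc.globalregistry.repository"),
            p.getProperty("dfc.globalregistry.username"),
            p.getProperty("dfc.globalregistry.password")
        };
    }

    public static void main(String[] args) throws Exception {
        String dfc = String.join("\n",
            "dfc.globalregistry.repository=my_global_registry",
            "dfc.globalregistry.username=dm_bof_registry",
            "dfc.globalregistry.password=AFAIK...");
        String[] c = registryCredentials(dfc);
        // build the dctmping invocation with no user interaction;
        System.out.println("java dctmping " + c[0] + " " + c[1] + " '" + c[2] + "'");
    }
}
```

Note that Properties splits each line on the first '=' only, so the '=' padding characters at the end of the encrypted password are preserved in the value.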

    Conclusion

    This innocent little stand-alone utility can be easily included in any docker image and can assist in ascertaining at least 5 dependencies:

    1. the correct installation of the DFCs
    2. the correct setup of the dfc.properties
    3. the correct initialization of $DOCUMENTUM and $CLASSPATH environment variables
    4. the accessibility of the remote repository
    5. and finally, the credentials to connect to said repository.

    That is one big chunk of prerequisites to check at once, even before the WDK client starts.
    As a bonus, it also lists all the repositories potentially accessible through the docbrokers listed in the dfc.properties, which is useful when the local machine hosts a shared service serving several repositories. Another freebie is that, if a global_registry is defined in the dfc.properties file, it can be checked too; actually, the utility could be improved to do that automatically, as the needed credentials are given in that file and require no user interaction, but, as they say, let's leave that as an exercise for the reader.

    Cet article dctmping, A Documentum Repository Checker Utility est apparu en premier sur Blog dbi services.

    Connecting to Repositories with the Same Name and/or ID


    A rare but recurring issue that customers sometimes face is how to connect to each of several distinct repositories bearing the same name or the same docbase id, or even both when one repository is a clone of the other. The current connection resolution technique based on the dfc.properties file does not support this and only lets one connect to the first matching repository found. Knowing this limitation, why were they created with the same name in the first place ? Just don't, and the pitfall is avoided. In practice, however, this situation makes sense when the repositories are created by different teams, e.g. one TEST repository for each application in development, or developers' personal repositories, maybe local on their laptops, or an application's repository existing in different stages of its lifecycle, e.g. a PRODUCT_CATALOG repository in DEV, TUNI, INT, CTLQ, ACC, PROD, ARCHIVED, and maybe CLONE_1, CLONE_2, etc…
    Now, at some point, a developer would need to access simultaneously, say, PRODUCT_CATALOG in DEV and PRODUCT_CATALOG in PROD in order to troubleshoot a problem, a plausible scenario. Another scenario is when a common, DFCs-based service must access docbases with the same name coming from different, independently managed applications. So, how to do that ?

    Yes, how to do that ?

    In an earlier article, I presented a possible solution based on editing the dfc.properties file on the fly prior to connecting, so that its docbroker_host and docbroker_port parameters point to the target repository's docbroker. As you know, that file is required to be able to connect to repositories. It is read by the DFCs when a connection request is issued by a DFCs client, and the docbrokers listed there are queried in turn until one of them replies with the information for the requested target that projects to it.
    The problem with this algorithm is that it is not selective enough to tell apart distinct repositories with the same name. If the target could be specified like docbase[.server_instance][@machine[:port]], that could work. Actually, the above syntax, minus the [:port] part, is accepted but it does not work in our case. A typical returned error is:

    DfNoServersException:: THREAD: main; MSG: [DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE]error: "The DocBroker running on host (null:0) does not know of a server for the specified docbase (dmtest73@192.168.56.12)"; ERRORCODE: 100; NEXT: null

    The hack described in the aforementioned article consisted in removing all the docbroker_host and docbroker_port pairs of parameters and inserting the exact one that is used by the repository of interest, so any ambiguity is lifted. Prior to the next connection, the work-around is repeated for the new target.
    Wrappers around the command-line tools iapi and idql, respectively named wiapi and widql, do just that.
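In essence, the wrappers filter the dfc.docbroker.* lines out of a copy of the dfc.properties and put back a single pair pointing at the wanted docbroker; a string-level sketch of that rewrite (the helper is hypothetical, not the wrappers' actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class DfcPropertiesRewriter {
    // Drop every dfc.docbroker.host/port line and append a single pair
    // targeting the given repository's docbroker, lifting any ambiguity;
    static List<String> forceDocbroker(List<String> lines, String host, String port) {
        List<String> out = new ArrayList<>();
        for (String line : lines)
            if (!line.startsWith("dfc.docbroker.host") && !line.startsWith("dfc.docbroker.port"))
                out.add(line);
        out.add("dfc.docbroker.host[0]=" + host);
        out.add("dfc.docbroker.port[0]=" + port);
        return out;
    }

    public static void main(String[] args) {
        List<String> dfc = List.of(
            "dfc.data.dir=/home/dmadmin/documentum/shared",
            "dfc.docbroker.host[0]=dmtest.cec",
            "dfc.docbroker.port[0]=1489");
        forceDocbroker(dfc, "docker", "7489").forEach(System.out::println);
    }
}
```

The rewritten content would then be saved to a temporary file and handed to the tool, e.g. through -Ddfc.properties.file, before the next connection attempt.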
    It works quite well for these tools but what about java DFCs clients ? I guess we could subclass the session manager's getSession() or write a wrapper that applies the same work-around transparently, and that was my intention at first when a customer raised the issue, but I decided to give OpenText's knowledge base another try, in case an out-of-the-box yet unknown parameter would solve this question with no programming needed, i.e. no customization of existing code. I was not left empty-handed as I found this article: “How can I dynamically switch between different Docbrokers using DFC?”, Article ID:KB8801087. Here is the full note:

    How can I dynamically switch between different Docbrokers using DFC?
    
    Article ID:KB8801087
    
    
    Applies to
    
    Documentum Foundation Classes 4.0
    
    Summary
    
    How can I dynamically switch between different Docbrokers using DFC?
    
    Resolution
    Normally, you are restricted to Docbases registered to the Docbroker specified in the DMCL.ini file. Modifying the apiConfig object can still access Docbases that are not registered with this Docbroker.
    
    The steps involved are the following:
    1. Get a session object for a docbase which is registered with the docbroker specified in the dmcl.ini file.
    2. Get an apiConfig object using the method IDfSession.getClientConfig();
    3. Set the "primary_host" and "primary_port" attributes of this apiConfig object to a different docbroker.
    4. Get a session for a different docbase registered with this docbroker.
    
    For more details, please refer to the apiConfig object in the object reference manual for Documentum 4i.
    Legacy Article ID
    ECD316940
    

    It’s again a bit hacky and still requires a customization but at least it addresses the DFCs part. Although it is DFCs-oriented, as a proof of concept, let’s see first if we could make it work using the API from within iapi.

    Testing from within iapi

    In our test case, we have a docbase named dmtest73 with id 50000 projecting to docbroker host dmtest.cec on port 1489 and a different docbase with the same name dmtest73 but id 90005 projecting to the docbroker host docker on port 7489. Both docbrokers are present in the dfc.properties file. We want to be able to connect to each of those docbases. Let’s first try the normal, default connection:

    $ iapi dmtest73 -Udmadmin -Pdmadmin
    API> retrieve,c,dm_server_config
    dump,c,l
    USER ATTRIBUTES
    
      object_name                     : dmtest73
    ...
      owner_name                      : dmtest73
    ...
      acl_domain                      : dmtest73
    ...
      operator_name                   : dmtest73
    ...
      web_server_loc                  : dmtest.cec
    ...
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_server_config
      r_creation_date                 : 7/1/2019 19:26:18
      r_modify_date                   : 6/20/2021 23:55:33
    ...
      r_creator_name                  : dmtest73
    ...
      r_server_version                : 7.3.0000.0214  Linux64.Oracle
      r_host_name                     : dmtest.cec
      r_process_id                    : 908
      r_install_owner                 : dmadmin
    ...
    
    # dump the docbase config too;
    API> retrieve,s0,dm_docbase_config    
    ...
    3c00c35080000103
    API> dump,c,l
    ...
    USER ATTRIBUTES
    
      object_name                     : dmtest73
      title                           : a v7.3 test repository
    ...
      owner_name                      : dmtest73
    ...
      acl_domain                      : dmtest73
    ...
      index_store                     : DM_DMTEST73_INDEX
    ...
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_docbase_config
      r_creation_date                 : 7/1/2019 19:26:18
      r_modify_date                   : 7/1/2019 17:33:50
    ...
      r_creator_name                  : dmtest73
    ...
      r_dbms_name                     : Oracle
      r_docbase_id                    : 50000
    # we are in the docbase with id 50000;
    

    So, this is the dmtest73 docbase that is accessed with the current docbroker definitions in dfc.properties.
    Usually, only that docbase, with FQN dmtest73@dmtest.cec:1489, is reachable because its docbroker is listed earlier than dmtest73@docker:7489 in dfc.properties, as reported by the session’s apiconfig object:

    dump,c,apiconfig
    ...
      dfc.docbroker.exclude.failure_th: 3
      dfc.docbroker.exclusion.time    : 30
      dfc.docbroker.host           [0]: dmtest.cec
                                   [1]: dmtest.cec
                                   [2]: docker
                                   [3]: docker
                                   [4]: docker
                                   [5]: docker
      dfc.docbroker.port           [0]: 7289
                                   [1]: 1489
                                   [2]: 1489
                                   [3]: 1589
                                   [4]: 7489
                                   [5]: 6489
      dfc.docbroker.protocol       [0]: rpc_static
                                   [1]: rpc_static
                                   [2]: rpc_static
                                   [3]: rpc_static
                                   [4]: rpc_static
                                   [5]: rpc_static
      dfc.docbroker.search_order      : sequential
      dfc.docbroker.service        [0]: dmdocbroker
                                   [1]: dmdocbroker
                                   [2]: dmdocbroker
                                   [3]: dmdocbroker
                                   [4]: dmdocbroker
                                   [5]: dmdocbroker
      dfc.docbroker.timeout        [0]: 0
                                   [1]: 0
                                   [2]: 0
                                   [3]: 0
                                   [4]: 0
                                   [5]: 0
    ...
    

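The way the DFCs walk these entries, per dfc.docbroker.search_order=sequential, can be sketched with a simple lookup table standing in for the docbrokers' projection maps (an illustration of the search order, not the DFCs' implementation):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DocbrokerResolution {
    // Walk the paired host/port lists in order and return the first
    // docbroker that knows the requested docbase name;
    static String resolve(List<String> hosts, List<String> ports,
                          Map<String, Set<String>> projections, String docbase) {
        for (int i = 0; i < hosts.size(); i++) {
            String broker = hosts.get(i) + ":" + ports.get(i);
            if (projections.getOrDefault(broker, Set.of()).contains(docbase))
                return broker;
        }
        return null;  // DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE territory
    }

    public static void main(String[] args) {
        // two distinct docbases named dmtest73 project to two docbrokers,
        // but only the first broker in the lists is ever returned;
        Map<String, Set<String>> projections = Map.of(
            "dmtest.cec:1489", Set.of("dmtest73"),
            "docker:7489", Set.of("dmtest73"));
        System.out.println(resolve(List.of("dmtest.cec", "docker"),
                                   List.of("1489", "7489"), projections, "dmtest73"));
    }
}
```

This is precisely why the name alone cannot tell homonym docbases apart: the search stops at the first match, and the second docbroker is never consulted.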
    Here, 6 pairs of docbroker_host/docbroker_port were defined in the dfc.properties; each one has additional default parameters dfc.docbroker.protocol, dfc.docbroker.service and dfc.docbroker.timeout; they are all synchronized, i.e.
    dfc.docbroker.host[i] uses port dfc.docbroker.port[i], dfc.docbroker.protocol[i], dfc.docbroker.service[i] with a timeout of dfc.docbroker.timeout[i].
    The Note says to force the primary docbroker (i.e. the first one, corresponding to the values at index 0 of the dfc.docbroker% attributes) to the target docbase's one; it doesn't say anything about the other ones but we'll remove them to rule out any failover logic:

    # empty the dfc.docbroker% attributes;
    API> truncate,c,apiconfig,dfc.docbroker.host
    truncate,c,apiconfig,dfc.docbroker.port
    truncate,c,apiconfig,dfc.docbroker.protocol
    truncate,c,apiconfig,dfc.docbroker.service
    truncate,c,apiconfig,dfc.docbroker.timeout
    
    # add the target docbase's docbroker;
    API> append,c,apiconfig,dfc.docbroker.host
    docker
    append,c,apiconfig,dfc.docbroker.port
    7489
    append,c,apiconfig,dfc.docbroker.protocol
    rpc_static
    append,c,apiconfig,dfc.docbroker.service
    dmdocbroker
    append,c,apiconfig,dfc.docbroker.timeout
    0
    
    # verify the parameters;
    API> dump,c,apiconfig
    ...
      dfc.docbroker.host           [0]: docker
      dfc.docbroker.port           [0]: 7489
      dfc.docbroker.protocol       [0]: rpc_static
      dfc.docbroker.search_order      : sequential
      dfc.docbroker.service        [0]: dmdocbroker
      dfc.docbroker.timeout        [0]: 0
    ...
    
    # try to connect to dmtest73@docker:7489 now; 
    API> connect,dmtest73,dmadmin,dmadmin
    ...
    s1
    # OK but where are we really ?
    API> retrieve,s1,dm_server_config
    ...
    3d015f9580000102
    API> dump,c,l
    USER ATTRIBUTES
    
      object_name                     : dmtest73
    ...
      owner_name                      : dmtest73c
    ...
      acl_domain                      : dmtest73c
    ...
      operator_name                   : dmtest73c
    ...
      web_server_loc                  : container73
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_server_config
      r_creation_date                 : 6/20/2021 02:52:36
      r_modify_date                   : 6/20/2021 00:59:43
    ...
      r_creator_name                  : dmtest73c
    ...
      r_server_version                : 16.4.0000.0248  Linux64.Oracle
      r_host_name                     : container73
      r_process_id                    : 13709
    ...
    
    # dump the docbase config too;
    API> retrieve,s1,dm_docbase_config
    ...
    3c015f9580000103
    API> dump,c,l
    ...
    USER ATTRIBUTES
    
      object_name                     : dmtest73
      title                           : dmtest73 homonym silently
    ...
    ...
      acl_domain                      : dmtest73c
    ...
      index_store                     : dm_dmtest73c_index
    ...
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_docbase_config
      r_creation_date                 : 6/20/2021 02:52:36
      r_modify_date                   : 6/20/2021 01:27:22
    ...
    ...
      r_dbms_name                     : Oracle
      r_docbase_id                    : 90005
    ...
    # we are now in the docbase with id 90005;
    

    It works as proved by comparing the dm_server_config and dm_docbase_config objects, notably the dm_docbase_config.r_docbase_id.
    So, what to think of such a solution ? It requires some cutting and pasting of statements but they could be saved into some sort of macros depending on the command-line client used (e.g. the GUI-based dqman). For iapi, I'd definitely prefer starting it as described in the above article, e.g.

    $ wiapi dmtest73:docker:7489
    $ widql dmtest73:dmtest.cec:1489

    but if several connections to homonym docbases must be opened at once in the same iapi working session, this is the only way.

    The DFCs version

    The solution from the Note:KB8801087 was for the DFCs in the first place. Let’s apply it in function getSessionExtended() (starting on line 46 in the listing below) with the following profile:

    public void getSessionExtended(String repo, String user, String passwd) throws Exception
    

    The function takes a docbase's specification in repo, and a user name and password to connect to the repository. As we need an unambiguous syntax to define repo, let's use the time-proven one:

    docbase_name[:docbroker_machine[:docbroker_port]]

    where docbroker_machine:docbroker_port are the coordinates of the docbroker, possibly remote, the docbase docbase_name projects to. Obviously, two homonym docbases cannot project to the same docbroker (the docbroker would reject them all but the first one), but they could be running on the same machine and project to distinct docbrokers, locally or remotely.
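This syntax is easy to take apart with a regular expression; the snippet below exercises the same pattern the function uses, on its own (the class name is just for the demo):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RepoSyntax {
    // docbase_name[:docbroker_machine[:docbroker_port]]
    static final Pattern RE =
        Pattern.compile("^([^:]+)(:([^:]+)(:([^:]+))?)?$");

    // Returns {docbase, docbroker_host, docbroker_port}; host and port
    // are null when the classic single-name syntax is used;
    static String[] parse(String repo) {
        Matcher m = RE.matcher(repo);
        if (!m.find())
            return null;
        return new String[] { m.group(1), m.group(3), m.group(5) };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parse("dmtest73:docker:7489")));
        System.out.println(java.util.Arrays.toString(parse("dmtest73")));
    }
}
```

The optional groups come back null when absent, which is what lets the function fall back to the DFCs' default resolution when only the docbase name is given.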
    An alternate syntax could be:

    docbase_name@docbroker_machine:docbroker_port

    If preferred, the change is easy in the function’s regular expression (see line 51 below):

    Pattern re_docbase_docbroker_host_docbroker_port = Pattern.compile("^([^:]+)(@([^:]+)(:([^:]+))?)?$");

    The function will first parse the extended repository syntax (starting on line 53), apply the Note’s hack (lines 81 to 87), and finally connect (line 96). To be sure we are in the right docbase, all the configuration objects will be dumped by the ancillary function dump_current_configs() (defined starting on line 124, called on line 182) as well. Starting on line 111, the apiconfig object is displayed to check the hack’s change. On lines 102 to 108, we find out the minimum field width to display the apiconfig’s attributes without truncation, as the dump() method does a lame job here.
    If no docbroker is specified, the function falls back to the DFCs' default behavior, so it remains backwards compatible.
    In order to make the test possible, a main() function (starting on line 157) accepting and parsing command-line parameters (starting on line 167) is also provided.

    // A test program for function getSessionExtended() to open sessions using an extended syntax;
    // June 2021, cec@dbi-services.com;
    // to compile: javac extendedConnection.java
    // to execute: java -classpath .:$DOCUMENTUM/config:$CLASSPATH extendedConnection target_docbase user_name password
    // the program will attempt to connect to target_docbase using those credentials;
    // target_docbase's syntax is:
    //   docbase_name[:docbroker_machine[:docbroker_port]]
    // if docbroker_machine is missing (and hence docbroker_port too), a session is opened through the default behavior from the dfc.properties file, i.e. following the order the docbrokers are listed in that file;
    // otherwise, the docbrokers listed in dfc.properties are not used and the specified docbroker is;
    // so the function is compatible with the default behavior of the DFCs;
    import com.documentum.fc.client.IDfClient;
    import com.documentum.fc.client.DfClient;
    import com.documentum.fc.client.DfQuery;
    import com.documentum.fc.client.IDfCollection;
    import com.documentum.fc.client.IDfDocbaseMap;
    import com.documentum.fc.client.IDfQuery;
    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSessionManager;
    import com.documentum.fc.common.DfLoginInfo;
    import com.documentum.fc.common.IDfLoginInfo;
    import com.documentum.fc.common.IDfAttr;
    import com.documentum.fc.client.IDfTypedObject;
    import com.documentum.fc.client.IDfPersistentObject;
    import java.io.RandomAccessFile;
    import java.util.regex.Pattern;
    import java.util.regex.Matcher;
    import java.util.Enumeration;
    
    public class extendedConnection {
       String gr_username = null;
       String gr_password = null;
       IDfSessionManager sessMgr = null;
       IDfSession idfSession;
     
       public void usage() {
       // output the utility extended repository's syntax;
          System.out.println("Usage:");
           System.out.println("   java [-Ddfc.properties.file=/tmp/dfc.properties] -classpath .:/app/dctm/config:$CLASSPATH extendedConnection docbase[:docbroker-host[:docbroker-port]] username password");
          System.out.println("Use the extended repository syntax (e.g. dmtest:docker:7289, vs. dmtest) to override the DFCs' default resolution mechanism.");
          System.out.println("Examples:");
           System.out.println("   java -classpath .:/app/dctm/config:$CLASSPATH extendedConnection dmtest73 dmadmin dmadmin");
          System.out.println("for using the docbrokers defined in the dfc.properties (classic usage)");
           System.out.println("   java -classpath .:/tmp:$CLASSPATH extendedConnection dmtest73:docker:7489 dmadmin dmadmin");
          System.out.println("for short-circuiting the docbrokers and using the repo's extended syntax");
       }
       public void getSessionExtended(String repo, String user, String passwd) throws Exception {
       // change the dfc.docbroker.host and dfc.docbroker.port to connect with more flexibility;
       // The target repository repo is defined though the following advanced syntax:
       //    docbase[:docbroker-host[:docbroker-port]]
          System.out.printf("getSessionExtended%n");
          Pattern re_docbase_docbroker_host_docbroker_port = Pattern.compile("^([^:]+)(:([^:]+)(:([^:]+))?)?$");
    
          Matcher check = re_docbase_docbroker_host_docbroker_port.matcher(repo);
          String docbase = null;
          String docbroker_host = null;
          String docbroker_port = null;
          if (check.find()) {
             docbase = check.group(1);
             docbroker_host = check.group(3);
             docbroker_port = check.group(5);
          } 
           else {
              System.out.println("Missing docbase name; the docbase is mandatory");
              usage();
              throw new Exception("invalid repository specification: " + repo);
           }
          if (docbroker_host != null) {
             System.out.println("host = " + docbroker_host);
             if (docbroker_port == null)
                docbroker_port = "1489";
             System.out.println("port = " + docbroker_port);
          } 
          else
             System.out.println("docbroker host is empty, using the dfc.properties");
           System.out.println("using " + (docbroker_host != null ? ("docbroker host " + docbroker_host + ":" + docbroker_port) : "the dfc.properties"));
    
          IDfClient client = DfClient.getLocalClient();
          IDfTypedObject client_config = client.getClientConfig();
    
          if (docbroker_host != null) {
             // let's hack the session config to force the given docbroker[:port];
             client_config.truncate("dfc.docbroker.host", 0);
             client_config.appendString("dfc.docbroker.host", docbroker_host);
             client_config.truncate("dfc.docbroker.port", 0);
             client_config.appendString("dfc.docbroker.port", docbroker_port);
             client_config.truncate("dfc.docbroker.protocol", 1);
             client_config.truncate("dfc.docbroker.service", 1);
             client_config.truncate("dfc.docbroker.timeout", 1);
          }
    
          IDfLoginInfo login = new DfLoginInfo();
          login.setUser(user);
          login.setPassword(passwd);
          sessMgr = client.newSessionManager();
          sessMgr.setIdentity(docbase, login);
          idfSession = sessMgr.getSession(docbase);
    
          System.out.printf("session config:%n");
          int max_length = 0;
          // as the default presentation from dump() is hard to read due to the truncated attribute names, let's produce a non-truncated one:
          // first, iterate through the session config and find the longest one;
          for (Enumeration e = client_config.enumAttrs(); e.hasMoreElements() ;) {
             IDfAttr attr = (IDfAttr) e.nextElement();
             String name = attr.getName();
             String value = client_config.getString(name);
             if (null != value)
                max_length = max_length >= name.length() ? max_length : name.length();
          }
          // max_length contains now the length of the longest attribute name;
          // display the nicely formatted session config;
          for (Enumeration e = client_config.enumAttrs(); e.hasMoreElements() ;) {
             IDfAttr attr = (IDfAttr) e.nextElement();
             String name = attr.getName();
             String value = client_config.getAllRepeatingStrings(name, "\n" + new String(new char[max_length + 2]).replace("\0", " "));
             System.out.printf("%" + max_length + "s: %s%n", name, value);
          }
       }
     
       public void releaseSession() throws Exception {
       // quite obvious;
          sessMgr.release(idfSession);
       }
     
       public void dump_all_configs() throws Exception {
       // dump all the server and docbase configs defined in repository;
          System.out.printf("%nin dump_all_configs%n");
          String[] configs = {"select r_object_id from dm_server_config",
                              "select r_object_id from dm_docbase_config"};
          IDfQuery query = new DfQuery();
          for (String dql_stmt: configs) {
             System.out.println("executing " + dql_stmt);
             query.setDQL(dql_stmt);
             IDfCollection collection = null;
             String r_object_id = null;
             try {
                collection = query.execute(idfSession, IDfQuery.DF_READ_QUERY);
                while (collection.next()) {
                   r_object_id = collection.getString("r_object_id");
                   IDfPersistentObject obj = idfSession.getObjectByQualification("dm_sysobject where r_object_id = '" + r_object_id + "'");
                   System.out.println("dumping object with id = " + r_object_id);
                   System.out.println(obj.dump());
                }
             }
             catch(Exception e) {
                System.out.printf("Error in dump_all_configs()%n");
                System.out.println(e.getMessage());
                e.printStackTrace();
                // continue as far as possible;
             }
             finally {
                if (collection != null)
                   collection.close();
             }
          }
       }
    
       public static void main(String[] args) throws Exception {
          System.out.printf("%nextendedConnection started ...%n");
          extendedConnection dmtest = new extendedConnection();
          if (0 == args.length)
             System.exit(0);
    
          String docbase = null;
          String user    = null;
          String passwd  = null;
          // if any arguments are present, they must be the target docbase and the credentials to connect (username and password);
          if (args.length != 3) {
          System.out.println("Wrong number of arguments. Usage: extendedConnection target_docbase user_name password");
             System.exit(1);
          }
          else {
             docbase = args[0];
             user    = args[1];
             passwd  = args[2];
          }
    
          try {
             // connect using the command-line parameters;
             dmtest.getSessionExtended(docbase, user, passwd);
    
             //dump all the server and docbase configs;
             dmtest.dump_all_configs();
          }
          catch(Exception e) {
             System.out.printf("Error while working in the docbase %s as user %s\n", docbase, user);
             System.out.println(e.getMessage());
             e.printStackTrace();
          }
          finally {
             try {
                dmtest.releaseSession();
             }
             catch(Exception e) {}
          }
       }
    }
    

    Interestingly, the DFCs’ truncate method takes an additional argument compared to its API equivalent: the index at which truncation should start. Also, with the DFCs, no open session is needed before accessing the temporary config object, named here client config as opposed to the API’s apiconfig.
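The parsing of the extended docbase[:docbroker-host[:docbroker-port]] syntax can be tried in isolation; here is a minimal standalone sketch (the class name and the sample inputs are mine):

```java
// Standalone demo of the docbase[:docbroker-host[:docbroker-port]] parsing used above.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RepoSyntaxDemo {
   public static void main(String[] args) {
      Pattern p = Pattern.compile("^([^:]+)(:([^:]+)(:([^:]+))?)?$");
      for (String repo : new String[] {"dmtest73", "dmtest73:docker", "dmtest73:docker:7489"}) {
         Matcher m = p.matcher(repo);
         if (m.find())
            // group(3)/group(5) come back null when host/port are omitted
            System.out.println(repo + " -> docbase=" + m.group(1)
               + ", host=" + m.group(3) + ", port=" + m.group(5));
      }
   }
}
```

For dmtest73:docker, group(5) is null, which getSessionExtended() above turns into the default port 1489.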
    Save this code into a file named extendedConnection.java in current directory.
    To compile it:

    javac extendedConnection.java
    

    To execute it from current directory (paths may vary according to your installation):

    $ export DOCUMENTUM=/app/dctm
    $ export CLASSPATH=$DOCUMENTUM/dctm.jar:$DOCUMENTUM/dfc/dfc.jar
    $ java -classpath .:$DOCUMENTUM/config:$CLASSPATH extendedConnection target_docbase user_name password
    

    Example of output

    Here is an example of the default behavior, i.e. when the target repo does not use the extended syntax repo:docbroker_host[:docbroker_port]:

    $ java -classpath .:/app/dctm/config:$CLASSPATH extendedConnection dmtest73 dmadmin dmadmin
    
    extendedConnection started ...
    getSessionExtended
    docbroker host is empty, using the dfc.properties
    using the dfc.properties
    session config:
                                                      dfc.name: dfc
                                                dfc.config.dir: /app/dctm/config
                                               dfc.config.file: file:/app/dctm/config/dfc.properties
    ...
                                            dfc.docbroker.host: 
                                                                192.168.56.12
                                                                
                                                                192.168.56.15
                                                                192.168.56.15
                                                                192.168.56.15
                                            dfc.docbroker.port: 0
                                                                1489
                                                                0
                                                                7489
                                                                6489
                                                                1489
    ...
                                        dfc.docbroker.protocol: rpc_static
                                                                rpc_static
                                                                rpc_static
                                                                rpc_static
                                                                rpc_static
                                                                rpc_static
                                         dfc.docbroker.service: dmdocbroker
                                                                dmdocbroker
                                                                dmdocbroker
                                                                dmdocbroker
                                                                dmdocbroker
                                                                dmdocbroker
                                         dfc.docbroker.timeout: 0
                                                                0
                                                                0
                                                                0
                                                                0
                                                                0
    ...
    
    in dump_all_configs
    executing select r_object_id from dm_server_config
    dumping object with id = 3d00c35080000102
    USER ATTRIBUTES
    
      object_name                     : dmtest73
      title                           : 
      subject                         : 
    ...
      owner_name                      : dmtest73
    ...
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_server_config
      r_creation_date                 : 7/1/2019 7:26:18 PM
    ...
      r_creator_name                  : dmtest73
    ...
      r_server_version                : 7.3.0000.0214  Linux64.Oracle
      r_host_name                     : dmtest.cec
    ...
    
    executing select r_object_id from dm_docbase_config
    dumping object with id = 3c00c35080000103
    USER ATTRIBUTES
    
      object_name                     : dmtest73
      title                           : a v7.3 test repository
    ...
      acl_domain                      : dmtest73
    ...
      index_store                     : DM_DMTEST73_INDEX
    ...
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_docbase_config
      r_creation_date                 : 7/1/2019 7:26:18 PM
    ...
      r_creator_name                  : dmtest73
    ...
      r_docbase_id                    : 50000
    ...
    

    This is the dmtest73 docbase with id 50000, like in the iapi example seen before.
    Make sure $CLASSPATH includes dfc.jar, or dctm.jar if going through its manifest is preferred.
    Next, an example of accessing a docbase with the same name as previously but on another host, using the extended syntax:

    $ java -classpath .:/app/dctm/config:$CLASSPATH extendedConnection dmtest73:docker:7489 dmadmin dmadmin
    
    extendedConnection started ...
    getSessionExtended
    host = docker
    port = 7489
    using the docbroker host docker:7489
    session config:
                                                      dfc.name: dfc
                                                dfc.config.dir: /app/dctm/config
                                               dfc.config.file: file:/app/dctm/config/dfc.properties
    ...
                                            dfc.docbroker.host: docker
                                            dfc.docbroker.port: 7489
    ...
                                        dfc.docbroker.protocol: rpc_static
                                         dfc.docbroker.service: dmdocbroker
                                         dfc.docbroker.timeout: 0
    ...
    
    in dump_all_configs
    executing select r_object_id from dm_server_config
    dumping object with id = 3d015f9580000102
    USER ATTRIBUTES
    
      object_name                     : dmtest73
      title                           : 
      subject                         : 
    ...
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_server_config
      r_creation_date                 : 6/20/2021 2:52:36 AM
    ...
      r_creator_name                  : dmtest73c
    ...
      r_server_version                : 16.4.0000.0248  Linux64.Oracle
      r_host_name                     : container73
    ...
    
    executing select r_object_id from dm_docbase_config
    dumping object with id = 3c015f9580000103
    USER ATTRIBUTES
    
      object_name                     : dmtest73
      title                           : dmtest73 homonym silently
    ...
      acl_domain                      : dmtest73c
    ...
      index_store                     : dm_dmtest73c_index
    ...
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_docbase_config
      r_creation_date                 : 6/20/2021 2:52:36 AM
    ...
      r_creator_name                  : dmtest73c
    ...
      r_docbase_id                    : 90005
    ...
    
    

    The reached docbase has id 90005, as expected.
    As a final example, let’s connect to the default dmtest73 again but this time using the extended syntax:

    $ java -classpath .:/app/dctm/config:$CLASSPATH extendedConnection dmtest73:dmtest.cec:1489 dmadmin dmadmin
    
    extendedConnection started ...
    
    getSessionExtended
    host = dmtest.cec
    port = 1489
    using the docbroker host dmtest.cec:1489
    session config:
                                                      dfc.name: dfc
                                                dfc.config.dir: /app/dctm/config
                                               dfc.config.file: file:/app/dctm/config/dfc.properties
    ...
                                      dfc.docbroker.debug.host: 
                                      dfc.docbroker.debug.port: 0
    ..
                                            dfc.docbroker.host: dmtest.cec
                                            dfc.docbroker.port: 1489
    ..
                                        dfc.docbroker.protocol: rpc_static
                                         dfc.docbroker.service: dmdocbroker
                                         dfc.docbroker.timeout: 0
    ...
    
    in dump_all_configs
    executing select r_object_id from dm_server_config
    dumping object with id = 3d00c35080000102
    USER ATTRIBUTES
    
      object_name                     : dmtest73
      title                           : 
      subject                         : 
    ...
      operator_name                   : dmtest73
    ...
      web_server_loc                  : dmtest.cec
    ...
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_server_config
      r_creation_date                 : 7/1/2019 7:26:18 PM
    ...
      r_creator_name                  : dmtest73
    ...
      r_server_version                : 7.3.0000.0214  Linux64.Oracle
      r_host_name                     : dmtest.cec
    ...
    
    executing select r_object_id from dm_docbase_config
    dumping object with id = 3c00c35080000103
    USER ATTRIBUTES
    
      object_name                     : dmtest73
      title                           : a v7.3 test repository
    ...
      acl_domain                      : dmtest73
    ...
      index_store                     : DM_DMTEST73_INDEX
    ...
    
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_docbase_config
      r_creation_date                 : 7/1/2019 7:26:18 PM
    ...
      r_creator_name                  : dmtest73
    ...
      r_docbase_id                    : 50000
    ...
    

    The reached docbase has id 50000 as expected.
    The client config shows here the same output as sessionconfig from within iapi, which shouldn’t come as a surprise given that iapi invokes the DFCs behind the scenes.
    As with the alternative in the aforementioned article, this work-around even allows removing the docbroker_host/docbroker_port pairs from the dfc.properties file entirely, as it does not use them and imposes its own direct, flattened resolution mechanism (“just contact the damn given docbroker”). This can be verified with a stripped-down dfc.properties file (e.g. /tmp/dfc.properties) containing no such entries and invoking the test program thusly:
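For reference, such a stripped-down file can be almost empty; a hypothetical /tmp/dfc.properties for this test could contain just the following (the value is illustrative):

```properties
# note the deliberate absence of any dfc.docbroker.host / dfc.docbroker.port
# entries: they are injected programmatically by getSessionExtended()
dfc.data.dir=/app/dctm/data
```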

    $ java -Ddfc.properties.file=/tmp/dfc.properties -classpath .:$CLASSPATH extendedConnection dmtest73:docker:7489 dmadmin dmadmin
    

    Of course, in such a case, the default resolution mechanism won’t work when the normal repository syntax is used, and the DFCs will complain with an error, as shown below:

    $ java -Ddfc.properties.file=/tmp/dfc.properties -classpath .:$CLASSPATH extendedConnection dmtest73 dmadmin dmadmin
    
    extendedConnection started ...
    getSessionExtended
    docbroker host is empty, using the dfc.properties
    using the dfc.properties
    Error while working in the docbase dmtest73 as user dmadmin
    [DM_DOCBROKER_E_NO_DOCBROKERS]error:  "No DocBrokers are configured"
    DfServiceException:: THREAD: main; MSG: [DM_DOCBROKER_E_NO_DOCBROKERS]error:  "No DocBrokers are configured"; ERRORCODE: 100; NEXT: null
    	at com.documentum.fc.client.DfServiceException.newNoDocbrokersException(DfServiceException.java:44)
    ...
    

    Conclusion

    Although this work-around smells like a hack, it is a very effective one. From within iapi or any DFC client with a slightly smarter connection function, this recurrent yet pesky limitation has finally found a solution. However, while this is acceptable for in-house development where the source code is available, third-party developers might not want to bother with an ad hoc customization to fix a niche problem; so some per$ua$ion work might be needed here too. Let’s hope that OpenText, breaking with 30 years of immobilism in this area of the code, will soon delight us with a really transparent solution which, who knows, supports an enhanced repository syntax such as the proposed one.

    This article Connecting to Repositories with the Same Name and/or ID first appeared on the dbi services Blog.

    Documentum – dmqdocbroker/iapi/idql not working because of dbor.properties.lck


    Have you ever faced an issue where dmqdocbroker, iapi, idql and the likes aren’t able to communicate at all with any Docbroker (connection broker)? Here, I’m not talking about a potentially wrong hostname, port or connect mode, which might prevent you from reaching a Docbroker if it’s not configured properly, because that one will still most likely reply with an error message… I’m really talking about the utility/binaries that cannot communicate anymore; it’s like all messages are sent to the void and nothing will ever respond (is that a black hole I’m seeing?)!

    Earlier this month, I suddenly faced this behavior at one of our customers on two out of dozens of Documentum Servers. Everything seemed to be up & running, all the processes were there:

    [dmadmin@cs-0 ~]$ ps -ef
    UID      PID PPID C  STIME TTY       TIME CMD
    dmadmin 7005    1 0  14:11 ?     00:00:00 ./dmdocbroker -port 1489 -init_file $DOCUMENTUM/dba/Docbroker.ini
    dmadmin 7014    1 0  14:11 ?     00:00:00 ./dmdocbroker -port 1487 -init_file $DOCUMENTUM/dba/DocbrokerExt.ini
    dmadmin 7077    1 0  14:11 ?     00:00:07 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 7087    1 0  14:11 ?     00:00:07 ./documentum -docbase_name REPO1 -security acl -init_file $DOCUMENTUM/dba/config/REPO1/server.ini
    dmadmin 7100 7077 0  14:11 ?     00:00:00 $DM_HOME/bin/mthdsvr master 0xfd7308a8, 0x7f9f93d81000, 0x223000 1000726 5 7077 GR_REPO $DOCUMENTUM/dba/log
    dmadmin 7101 7100 0  14:11 ?     00:00:04 $DM_HOME/bin/mthdsvr worker 0xfd7308a8, 0x7f9f93d81000, 0x223000 1000726 5 0 GR_REPO $DOCUMENTUM/dba/log
    dmadmin 7102 7087 0  14:11 ?     00:00:00 $DM_HOME/bin/mthdsvr master 0xfd7308be, 0x7fe2fe3ac000, 0x223000 1000727 5 7087 REPO1 $DOCUMENTUM/dba/log
    dmadmin 7121 7102 0  14:11 ?     00:00:03 $DM_HOME/bin/mthdsvr worker 0xfd7308be, 0x7fe2fe3ac000, 0x223000 1000727 5 0 REPO1 $DOCUMENTUM/dba/log
    dmadmin 7122 7100 0  14:11 ?     00:00:03 $DM_HOME/bin/mthdsvr worker 0xfd7308a8, 0x7f9f93d81000, 0x223000 1000726 5 1 GR_REPO $DOCUMENTUM/dba/log
    dmadmin 7123 7077 0  14:11 ?     00:00:00 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 7124 7077 0  14:11 ?     00:00:00 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 7144 7102 0  14:11 ?     00:00:04 $DM_HOME/bin/mthdsvr worker 0xfd7308be, 0x7fe2fe3ac000, 0x223000 1000727 5 1 REPO1 $DOCUMENTUM/dba/log
    dmadmin 7148 7087 0  14:11 ?     00:00:00 ./documentum -docbase_name REPO1 -security acl -init_file $DOCUMENTUM/dba/config/REPO1/server.ini
    dmadmin 7149 7087 0  14:11 ?     00:00:00 ./documentum -docbase_name REPO1 -security acl -init_file $DOCUMENTUM/dba/config/REPO1/server.ini
    dmadmin 7165 7100 0  14:11 ?     00:00:04 $DM_HOME/bin/mthdsvr worker 0xfd7308a8, 0x7f9f93d81000, 0x223000 1000726 5 2 GR_REPO $DOCUMENTUM/dba/log
    dmadmin 7166 7077 0  14:11 ?     00:00:00 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 7167 7102 0  14:11 ?     00:00:03 $DM_HOME/bin/mthdsvr worker 0xfd7308be, 0x7fe2fe3ac000, 0x223000 1000727 5 2 REPO1 $DOCUMENTUM/dba/log
    dmadmin 7168 7087 0  14:11 ?     00:00:00 ./documentum -docbase_name REPO1 -security acl -init_file $DOCUMENTUM/dba/config/REPO1/server.ini
    dmadmin 7169 7100 0  14:11 ?     00:00:04 $DM_HOME/bin/mthdsvr worker 0xfd7308a8, 0x7f9f93d81000, 0x223000 1000726 5 3 GR_REPO $DOCUMENTUM/dba/log
    dmadmin 7187 7077 0  14:11 ?     00:00:00 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 7190 7102 0  14:11 ?     00:00:03 $DM_HOME/bin/mthdsvr worker 0xfd7308be, 0x7fe2fe3ac000, 0x223000 1000727 5 3 REPO1 $DOCUMENTUM/dba/log
    dmadmin 7194 7087 0  14:11 ?     00:00:00 ./documentum -docbase_name REPO1 -security acl -init_file $DOCUMENTUM/dba/config/REPO1/server.ini
    dmadmin 7210 7100 0  14:11 ?     00:00:03 $DM_HOME/bin/mthdsvr worker 0xfd7308a8, 0x7f9f93d81000, 0x223000 1000726 5 4 GR_REPO $DOCUMENTUM/dba/log
    dmadmin 7213 7077 0  14:11 ?     00:00:00 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 7215 7102 0  14:11 ?     00:00:04 $DM_HOME/bin/mthdsvr worker 0xfd7308be, 0x7fe2fe3ac000, 0x223000 1000727 5 4 REPO1 $DOCUMENTUM/dba/log
    dmadmin 7225 7087 0  14:11 ?     00:00:00 ./documentum -docbase_name REPO1 -security acl -init_file $DOCUMENTUM/dba/config/REPO1/server.ini
    dmadmin 7334    1 0  14:12 ?     00:00:00 /bin/sh $JMS_HOME/server/startMethodServer.sh
    dmadmin 7336 7334 0  14:12 ?     00:00:00 /bin/sh $JMS_HOME/bin/standalone.sh
    dmadmin 7447 7336 21 14:12 ?     00:02:57 $JAVA_HOME/bin/java -D[Standalone] -server -XX:+UseCompressedOops -server -XX:+UseCompressedOops -Xms8g -Xmx8g -XX:MaxMetaspaceSize=512m -XX
    dmadmin 7695 7077 0  14:12 ?     00:00:04 ./dm_agent_exec -enable_ha_setup 1 -docbase_name GR_REPO.GR_REPO -docbase_owner dmadmin -sleep_duration 0
    dmadmin 7698 7087 0  14:12 ?     00:00:04 ./dm_agent_exec -enable_ha_setup 1 -docbase_name REPO1.REPO1 -docbase_owner dmadmin -sleep_duration 0
    dmadmin 7908 7077 0  14:13 ?     00:00:00 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 7918 7087 0  14:13 ?     00:00:00 ./documentum -docbase_name REPO1 -security acl -init_file $DOCUMENTUM/dba/config/REPO1/server.ini
    dmadmin 8269 7077 0  14:21 ?     00:00:00 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 8270 7077 0  14:21 ?     00:00:00 ./documentum -docbase_name GR_REPO -security acl -init_file $DOCUMENTUM/dba/config/GR_REPO/server.ini
    dmadmin 8327 6370 0  14:27 pts/1 00:00:00 ps -ef
    [dmadmin@cs-0 ~]$

     

    However, I could see the communication issue by looking at the Repository log, because it showed that the AgentExec was still not connected, even after almost 20 minutes:

    [dmadmin@cs-0 ~]$ cd $DOCUMENTUM/dba/log/REPO1/agentexec/
    [dmadmin@cs-0 agentexec]$ date
    Mon Apr 12 14:29:03 UTC 2021
    [dmadmin@cs-0 agentexec]$
    [dmadmin@cs-0 agentexec]$ tail -8 ../../REPO1.log
    2021-04-12T14:11:56.453407      7087[7087]      0000000000000000        [DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 7194, session 010f12345000000c) is started sucessfully."
    2021-04-12T14:11:57.455899      7087[7087]      0000000000000000        [DM_SERVER_I_START]info:  "Sending Initial Docbroker check-point "
    
    2021-04-12T14:11:57.547764      7087[7087]      0000000000000000        [DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 7225, session 010f123450000456) is started sucessfully."
    2021-04-12T14:11:58.348442      7223[7223]      010f123450000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (cs-0.domain.com) with port (1490).  Information: (Config(REPO1), Proximity(1), Status(Open), Dormancy Status(Active))."
    2021-04-12T14:11:58.661666      7223[7223]      010f123450000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (cs-0.domain.com) with port (1488).  Information: (Config(REPO1), Proximity(1), Status(Open), Dormancy Status(Active))."
    2021-04-12T14:11:58.959490      7223[7223]      010f123450000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (cs-1.domain.com) with port (1490).  Information: (Config(REPO1), Proximity(2), Status(Open), Dormancy Status(Active))."
    Mon Apr 12 14:12:55 2021 [INFORMATION] [AGENTEXEC 7698] Detected during program initialization: Version: 16.4.0200.0256  Linux64
    [dmadmin@cs-0 agentexec]$
    [dmadmin@cs-0 agentexec]$ # Previous startup from the AgentExec logs showing that it didn't start yet
    [dmadmin@cs-0 agentexec]$ tail -2 agentexec.log
    Sat Apr 10 19:20:30 2021 [INFORMATION] [LAUNCHER 23135] Detected during program initialization: Version: 16.4.0200.0256 Linux64
    Sun Apr 11 19:20:26 2021 [INFORMATION] [LAUNCHER 2890] Detected during program initialization: Version: 16.4.0200.0256 Linux64
    [dmadmin@cs-0 agentexec]$

     

    The interesting part is that the Repositories had all been started properly and projected to the Docbroker. However, no client local to the Documentum Server could connect to the Docbroker. Even more interesting, this was actually a HA environment with 2 CS: the Documentum Server hosting the Primary CS (I will call it cs-0) had the issue, while the Documentum Server hosting the Remote CS (I will call it cs-1) had no problem. Executing dmqdocbroker on cs-0 to ping the Docbroker of cs-0 never gave a response, while the exact same command pinging the Docbroker of cs-0 from the cs-1 host worked without any problem and showed the correct projection of the repositories:

    ## on cs-0 (Documentum Server hosting the PCS)
    [dmadmin@cs-0 ~]$ hostname -f
    cs-0.domain.com
    [dmadmin@cs-0 ~]$
    [dmadmin@cs-0 ~]$ time dmqdocbroker -t cs-0.domain.com -p 1489 -c ping
    ^C
    real 0m53.513s
    user 0m6.132s
    sys 0m0.578s
    [dmadmin@cs-0 ~]$
    [dmadmin@cs-0 ~]$ time echo quit | iapi REPO1.REPO1 -Udmadmin -Pxxx
    ^C
    real 0m46.431s
    user 0m6.241s
    sys 0m0.575s
    [dmadmin@cs-0 ~]$
    [dmadmin@cs-0 ~]$ time echo quit | iapi REPO1.cs-1_REPO1 -Udmadmin -Pxxx
    ^C
    real 0m35.694s
    user 0m6.163s
    sys 0m0.582s
    [dmadmin@cs-0 ~]$
    
    ## on cs-1 (Documentum Server hosting the RCS)
    [dmadmin@cs-1 ~]$ hostname -f
    cs-1.domain.com
    [dmadmin@cs-1 ~]$
    [dmadmin@cs-1 ~]$ time dmqdocbroker -t cs-0.domain.com -p 1489 -c ping
    dmqdocbroker: A DocBroker Query Tool
    dmqdocbroker: Documentum Client Library Version: 16.4.0200.0080
    Using specified port: 1489
    Successful reply from docbroker at host (cs-0) on port(1490) running software version (16.4.0200.0256 Linux64).
    
    real 0m3.499s
    user 0m6.950s
    sys 0m0.469s
    [dmadmin@cs-1 ~]$
    [dmadmin@cs-1 ~]$ time echo quit | iapi REPO1.cs-1_REPO1 -Udmadmin -Pxxx
    
    OpenText Documentum iapi - Interactive API interface
    Copyright (c) 2018. OpenText Corporation
    All rights reserved.
    Client Library Release 16.4.0200.0080
    
    Connecting to Server using docbase REPO1.cs-1_REPO1
    [DM_SESSION_I_SESSION_START]info: "Session 010f1234501b635c started for user dmadmin."
    
    Connected to OpenText Documentum Server running Release 16.4.0200.0256 Linux64.Oracle
    Session id is s0
    API> Bye
    
    real 0m5.032s
    user 0m7.401s
    sys 0m0.487s
    [dmadmin@cs-1 ~]$
    [dmadmin@cs-1 ~]$ time echo quit | iapi REPO1.REPO1 -Udmadmin -Pxxx
    
    OpenText Documentum iapi - Interactive API interface
    Copyright (c) 2018. OpenText Corporation
    All rights reserved.
    Client Library Release 16.4.0200.0080
    
    Connecting to Server using docbase REPO1.REPO1
    [DM_SESSION_I_SESSION_START]info: "Session 010f1234501b6506 started for user dmadmin."
    
    Connected to OpenText Documentum Server running Release 16.4.0200.0256 Linux64.Oracle
    Session id is s0
    API> Bye
    
    real 0m5.315s
    user 0m7.976s
    sys 0m0.515s
    [dmadmin@cs-1 ~]$

     

    This shows that the issue isn’t the Docbroker or the Repositories themselves but rather the utility/binaries present on cs-0, which cannot open communication channels with the local Docbroker, for some reason… Even after enabling debugging on the Docbroker, I could see communications when the dmqdocbroker utility was used on the cs-1 host, but nothing showed up when the same command was used on the cs-0 host instead. You can enable some logs for the Docbroker by adding “trace=true” into the Docbroker.ini file, and you can also add some other traces by setting the following environment variables (the value can be 1 or 10 for example) and then restarting the Docbroker: “export DM_DOCBROKER_TRACE=1; export DM_DEBUG_BROKER=1; export DM_TRANS_LOG=1“. Additionally, you can also add options to the launch script, just like for the Repository part: “-odocbroker_trace -onettrace_all_option -oxxx“.
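Put together, enabling those traces could look like the sketch below (the ini path used here and the restart command are illustrative and must be adapted to your installation):

```shell
# Illustrative: enable Docbroker tracing; adapt the path to your installation.
DOCBROKER_INI=/tmp/Docbroker.ini     # normally $DOCUMENTUM/dba/Docbroker.ini

# add trace=true to the ini file (idempotent)
grep -q '^trace=true' "$DOCBROKER_INI" 2>/dev/null || echo 'trace=true' >> "$DOCBROKER_INI"

# extra trace switches; the value can be 1 or 10 for more verbosity
export DM_DOCBROKER_TRACE=1
export DM_DEBUG_BROKER=1
export DM_TRANS_LOG=1

# the Docbroker must then be restarted to pick the settings up, e.g.:
# $DOCUMENTUM/dba/dm_stop_Docbroker && $DOCUMENTUM/dba/dm_launch_Docbroker
```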

    Unfortunately, the dmqdocbroker utility uses the dmawk binary, and iapi/idql are binaries as well, so it’s rather difficult to debug further without the source code… After some testing/debugging, I found something rather hard to believe… All the binaries of the Documentum Server looked OK: there were no changes done in the past few weeks and the files were the same (same hash) as on cs-1 for example. As you probably know, dmqdocbroker/iapi/idql will use the dfc.properties from the folder “$DOCUMENTUM_SHARED/config/” (with $DOCUMENTUM_SHARED=$DOCUMENTUM forced, starting in 16.4). Therefore, I started looking into this folder for anything that might disrupt the proper behavior of the utility/binaries. All the files in this folder were 100% identical between cs-0 and cs-1, except for the encrypted password of the dm_bof_registry as well as the dfc.keystore, since both of these are generated once. This would suggest the issue wasn’t there, but it was. I looked into other areas to try to find the root cause but nothing was working. Then, I came back to the config folder and simply tried to empty it… Somehow, the dmqdocbroker was working again, magically! I mean, it printed many errors because the files log4j.properties, dfc.properties and dfc.keystore weren’t there, but it replied something… What to do then? Well, I went step by step, putting back the files one by one, as they are supposed to be, and then executing the dmqdocbroker again to see if it stopped working.
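The restore-one-by-one loop described above can be scripted; here is a hypothetical sketch (the helper name and the health-check command are placeholders — in this case the check would be a dmqdocbroker ping with a timeout):

```shell
# Hypothetical bisection helper: move the backed-up files back into the current
# directory one at a time and re-run a health check after each one, so the
# first file that breaks the check is identified.
restore_and_test() {
   local backup_dir=$1 check_cmd=$2
   local f
   for f in "$backup_dir"/*; do
      [ -e "$f" ] || continue
      mv "$f" .
      if ! eval "$check_cmd" >/dev/null 2>&1; then
         echo "broken again after restoring: $(basename "$f")"
         return 1
      fi
   done
   echo "all files restored, still working"
}

# e.g., from $DOCUMENTUM_SHARED/config, after "mv * test/":
#   restore_and_test test 'timeout 10 dmqdocbroker -t cs-0.domain.com -p 1489 -c ping'
```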

    The files dfc.properties, log4j.properties, dfcfull.properties, dfc.keystore and all the cache folders were restored properly and the dmqdocbroker was still working without any problem… So what the hell? That’s more or less all of the files, isn’t it? True, that’s all the files, minus the dbor ones: dbor.properties and dbor.properties.lck. At this customer, these files are empty because no configuration was needed. It would be very hard to believe that this could be the issue, right? Well, have a look for yourself:

    [dmadmin@cs-0 ~]$ cd $DOCUMENTUM_SHARED/config/
    [dmadmin@cs-0 config]$ ls -l
    total 140
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26 2020 ServerApps
    drwxr-x--- 9 dmadmin dmadmin  4096 Jul 26 2020 Shared
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26 2020 acs
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26 2020 bpm
    -rwxr-x--- 1 dmadmin dmadmin     0 Jul 22 2020 dbor.properties
    -rw-rw-r-- 1 dmadmin dmadmin     0 Jul 26 2020 dbor.properties.lck
    -rw-rw-r-- 1 dmadmin dmadmin  2152 Jul 26 2020 dfc.keystore
    -rw-rw-r-- 1 dmadmin dmadmin   481 Jul 26 2020 dfc.properties
    -rw-rw---- 1 dmadmin dmadmin    70 Jul 22 2020 dfc.properties.bak.0
    -rwxr-x--- 1 dmadmin dmadmin   242 Jul 26 2020 dfc.properties.bak.1
    -rw-rw-r-- 1 dmadmin dmadmin   271 Jul 26 2020 dfc.properties.bak.2
    -rw-rw-r-- 1 dmadmin dmadmin   323 Jul 26 2020 dfc.properties.bak.3
    -rw-rw-r-- 1 dmadmin dmadmin   481 Jul 26 2020 dfc.properties.bak.4
    -rw-rw-r-- 1 dmadmin dmadmin   482 Jul 26 2020 dfc.properties.bak.5
    -rwxrwx--- 1 dmadmin dmadmin 79268 Jul 22 2020 dfcfull.properties
    -rwxr-x--- 1 dmadmin dmadmin  1242 Jul 26 2020 log4j.properties
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ # With the initial content, dmqdocbroker isn't working
    [dmadmin@cs-0 config]$ date; time dmqdocbroker -t cs-0.domain.com -p 1489 -c ping; date
    Wed Apr 14 07:43:28 UTC 2021
    ^C
    real    0m22.718s
    user    0m6.401s
    sys     0m0.853s
    Wed Apr 14 07:43:51 UTC 2021
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ mkdir test
    [dmadmin@cs-0 config]$ mv * test/
    mv: cannot move 'test' to a subdirectory of itself, 'test/test'
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ ls -l
    total 4
    drwxr-x--- 6 dmadmin dmadmin 4096 Apr 14 07:44 test
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ # With the folder empty, dmqdocbroker is "working" (errors but expected ones)
    [dmadmin@cs-0 config]$ date; time dmqdocbroker -t cs-0.domain.com -p 1489 -c ping; date
    Wed Apr 14 07:45:15 UTC 2021
    0 [main] ERROR com.documentum.fc.common.impl.logging.LoggingConfigurator  - Problem locating log4j configuration
    1 [main] WARN com.documentum.fc.common.impl.logging.LoggingConfigurator  - Using default log4j configuration
    3 [main] ERROR com.documentum.fc.common.impl.preferences.PreferencesManager  - [DFC_PREFERENCE_LOAD_FAILED] Failed to load persistent preferences from null
    java.io.FileNotFoundException: dfc.properties
            at com.documentum.fc.common.impl.preferences.PreferencesManager.locateMainPersistentStore(PreferencesManager.java:378)
            at com.documentum.fc.common.impl.preferences.PreferencesManager.readPersistentProperties(PreferencesManager.java:329)
            at com.documentum.fc.common.impl.preferences.PreferencesManager.<init>(PreferencesManager.java:37)
            ...
    2862 [main] ERROR com.documentum.fc.client.security.impl.IdentityManager  - [DFC_SECURITY_IDENTITY_INIT] no identity initialization or incomplete identity initialization
    DfException:: THREAD: main; MSG: ; ERRORCODE: ff; NEXT: null
            at com.documentum.fc.client.security.impl.JKSKeystoreUtil.creteNewKeystoreFile(JKSKeystoreUtil.java:425)
            at com.documentum.fc.client.security.impl.JKSKeystoreUtil.createNewKeystore(JKSKeystoreUtil.java:209)
            at com.documentum.fc.client.security.impl.DfcIdentityKeystore.applyDfcInitPolicy(DfcIdentityKeystore.java:95)
            ...
    dmqdocbroker: A DocBroker Query Tool
    dmqdocbroker: Documentum Client Library Version: 16.4.0200.0080
    Using specified port: 1489
    Successful reply from docbroker at host (cs-0) on port(1490) running software version (16.4.0200.0256 Linux64).
    
    real    0m3.763s
    user    0m6.265s
    sys     0m0.672s
    Wed Apr 14 07:45:19 UTC 2021
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ ls -l
    total 12
    drwxr-x--- 8 dmadmin dmadmin 4096 Apr 14 07:45 documentum
    -rw-r----- 1 dmadmin dmadmin 3245 Apr 14 07:45 log4j.log
    drwxr-x--- 6 dmadmin dmadmin 4096 Apr 14 07:44 test
    -rw-r----- 1 dmadmin dmadmin    0 Apr 14 07:45 trace.log
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ rm -rf documentum/ log4j.log  trace.log
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ mv test/log4j.properties ./
    [dmadmin@cs-0 config]$ mv test/dfc.properties ./
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ # With the folder empty except for log4j.properties and dfc.properties files, dmqdocbroker is working
    [dmadmin@cs-0 config]$ date; time dmqdocbroker -t cs-0.domain.com -p 1489 -c ping; date
    Wed Apr 14 07:47:17 UTC 2021
    dmqdocbroker: A DocBroker Query Tool
    dmqdocbroker: Documentum Client Library Version: 16.4.0200.0080
    Using specified port: 1489
    Successful reply from docbroker at host (cs-0) on port(1490) running software version (16.4.0200.0256 Linux64).
    
    real    0m4.280s
    user    0m8.161s
    sys     0m0.729s
    Wed Apr 14 07:47:21 UTC 2021
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ ls -l
    total 20
    drwxr-x--- 8 dmadmin dmadmin 4096 Apr 14 07:47 Shared
    -rw-r----- 1 dmadmin dmadmin 2153 Apr 14 07:47 dfc.keystore
    -rw-rw-r-- 1 dmadmin dmadmin  481 Jul 26  2020 dfc.properties
    -rwxr-x--- 1 dmadmin dmadmin 1242 Jul 26  2020 log4j.properties
    drwxr-x--- 6 dmadmin dmadmin 4096 Apr 14 07:46 test
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ rm -rf Shared/ dfc.keystore
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ mv test/dfc.keystore ./
    [dmadmin@cs-0 config]$ mv test/dfcfull.properties ./
    [dmadmin@cs-0 config]$ mv test/dfc.properties* ./
    [dmadmin@cs-0 config]$ mv test/log4j.properties* ./
    [dmadmin@cs-0 config]$ mv test/ServerApps ./
    [dmadmin@cs-0 config]$ mv test/Shared ./
    [dmadmin@cs-0 config]$ mv test/acs ./
    [dmadmin@cs-0 config]$ mv test/bpm ./
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ ls -l
    total 140
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26  2020 ServerApps
    drwxr-x--- 9 dmadmin dmadmin  4096 Jul 26  2020 Shared
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26  2020 acs
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26  2020 bpm
    -rw-rw-r-- 1 dmadmin dmadmin  2152 Jul 26  2020 dfc.keystore
    -rw-rw-r-- 1 dmadmin dmadmin   481 Jul 26  2020 dfc.properties
    -rw-rw---- 1 dmadmin dmadmin    70 Jul 22  2020 dfc.properties.bak.0
    -rwxr-x--- 1 dmadmin dmadmin   242 Jul 26  2020 dfc.properties.bak.1
    -rw-rw-r-- 1 dmadmin dmadmin   271 Jul 26  2020 dfc.properties.bak.2
    -rw-rw-r-- 1 dmadmin dmadmin   323 Jul 26  2020 dfc.properties.bak.3
    -rw-rw-r-- 1 dmadmin dmadmin   481 Jul 26  2020 dfc.properties.bak.4
    -rw-rw-r-- 1 dmadmin dmadmin   482 Jul 26  2020 dfc.properties.bak.5
    -rwxrwx--- 1 dmadmin dmadmin 79268 Jul 22  2020 dfcfull.properties
    -rwxr-x--- 1 dmadmin dmadmin  1242 Jul 26  2020 log4j.properties
    drwxr-x--- 2 dmadmin dmadmin  4096 Apr 14 07:51 test
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ ls -l test/
    total 0
    -rwxr-x--- 1 dmadmin dmadmin 0 Jul 22  2020 dbor.properties
    -rw-rw-r-- 1 dmadmin dmadmin 0 Jul 26  2020 dbor.properties.lck
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ # With the full folder except the dbor files, dmqdocbroker is still working
    [dmadmin@cs-0 config]$ date; time dmqdocbroker -t cs-0.domain.com -p 1489 -c ping; date
    Wed Apr 14 07:51:30 UTC 2021
    dmqdocbroker: A DocBroker Query Tool
    dmqdocbroker: Documentum Client Library Version: 16.4.0200.0080
    Using specified port: 1489
    Successful reply from docbroker at host (cs-0) on port(1490) running software version (16.4.0200.0256 Linux64).
    
    real    0m3.501s
    user    0m6.632s
    sys     0m0.666s
    Wed Apr 14 07:51:34 UTC 2021
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ mv test/dbor.properties* ./
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ # With the dbor files back, dmqdocbroker isn't working anymore
    [dmadmin@cs-0 config]$ date; time dmqdocbroker -t cs-0.domain.com -p 1489 -c ping; date
    Wed Apr 14 07:51:56 UTC 2021
    ^C
    real    0m30.682s
    user    0m5.001s
    sys     0m0.424s
    Wed Apr 14 07:52:27 UTC 2021
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ mv dbor.properties.lck test/
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ # Removing just the dbor files again, dmqdocbroker is working again
    [dmadmin@cs-0 config]$ date; time dmqdocbroker -t cs-0.domain.com -p 1489 -c ping; date
    Wed Apr 14 07:52:36 UTC 2021
    dmqdocbroker: A DocBroker Query Tool
    dmqdocbroker: Documentum Client Library Version: 16.4.0200.0080
    Using specified port: 1489
    Successful reply from docbroker at host (cs-0) on port(1490) running software version (16.4.0200.0256 Linux64).
    
    real    0m3.185s
    user    0m5.546s
    sys     0m0.578s
    Wed Apr 14 07:52:39 UTC 2021
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ ll
    total 140
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26  2020 ServerApps
    drwxr-x--- 9 dmadmin dmadmin  4096 Jul 26  2020 Shared
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26  2020 acs
    drwxr-x--- 7 dmadmin dmadmin  4096 Jul 26  2020 bpm
    -rwxr-x--- 1 dmadmin dmadmin     0 Jul 22  2020 dbor.properties
    -rw-r----- 1 dmadmin dmadmin     0 Apr 14 07:52 dbor.properties.lck
    -rw-rw-r-- 1 dmadmin dmadmin  2152 Jul 26  2020 dfc.keystore
    -rw-rw-r-- 1 dmadmin dmadmin   481 Jul 26  2020 dfc.properties
    -rw-rw---- 1 dmadmin dmadmin    70 Jul 22  2020 dfc.properties.bak.0
    -rwxr-x--- 1 dmadmin dmadmin   242 Jul 26  2020 dfc.properties.bak.1
    -rw-rw-r-- 1 dmadmin dmadmin   271 Jul 26  2020 dfc.properties.bak.2
    -rw-rw-r-- 1 dmadmin dmadmin   323 Jul 26  2020 dfc.properties.bak.3
    -rw-rw-r-- 1 dmadmin dmadmin   481 Jul 26  2020 dfc.properties.bak.4
    -rw-rw-r-- 1 dmadmin dmadmin   482 Jul 26  2020 dfc.properties.bak.5
    -rwxrwx--- 1 dmadmin dmadmin 79268 Jul 22  2020 dfcfull.properties
    -rwxr-x--- 1 dmadmin dmadmin  1242 Jul 26  2020 log4j.properties
    drwxr-x--- 2 dmadmin dmadmin  4096 Apr 14 07:52 test
    [dmadmin@cs-0 config]$
    [dmadmin@cs-0 config]$ diff dbor.properties.lck test/dbor.properties.lck
    [dmadmin@cs-0 config]$

     

    So as you can see above (it’s rather long but I wanted to include all the evidence I gathered, because I still can hardly believe this is the cause of the issue), just removing/renaming the empty file “dbor.properties.lck”, which had been there, untouched, for almost 9 months, is sufficient to have dmqdocbroker/iapi/idql working again… Putting the old empty file back brings the issue back. It’s the “same” file: same content (empty), same file format, everything… The only differences are the inode, of course, and the creation/modification dates.

    After some more investigations, the issue appeared to be on the NAS behind the filesystem, which was somehow still holding a lock on the file. For information, I also saw the same behavior on a second environment, but with the file “$DOCUMENTUM/config/ServerApps/identityInterprocessMutex.lock” this time… So if that ever happens to you, take a look at these lock files under $DOCUMENTUM/config and make sure there are no problems with the storage.
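    The workaround applied can be sketched as follows: recreate the suspect lock file so that it gets a fresh inode (the content is preserved, and for dbor.properties.lck it is empty anyway). The demo below uses a temporary file instead of the real $DOCUMENTUM/config/dbor.properties.lck.

```shell
# Demo on a temp file; in practice LOCK would be the suspect lock file.
LOCK=$(mktemp)
before=$(stat -c %i "$LOCK")
# Copy (preserving permissions) then rename back: same name and content,
# but a brand new inode, which drops any stale storage-side lock.
cp -p "$LOCK" "$LOCK.new" && mv "$LOCK.new" "$LOCK"
after=$(stat -c %i "$LOCK")
[ "$before" != "$after" ] && echo "inode refreshed: $before -> $after"
rm -f "$LOCK"
```

Obviously, do this only while the Content Server processes are stopped, so that nothing is legitimately holding the lock.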

    Cet article Documentum – dmqdocbroker/iapi/idql not working because of dbor.properties.lck est apparu en premier sur Blog dbi services.

    Documentum – E_INTERNAL_SERVER_ERROR on D2-REST Product page related to GUAVA libraries after WebLogic PSU


    At a customer, the D2-REST (16.5.1) application hosted on WebLogic Server 12c started showing 500 Internal Server Errors after a customer release that included many changes. The error was rather simple to replicate, since opening the D2-REST Product info page was sufficient (https://<host>/D2-REST/product-info). The URL was returning the following:

    At the same time, on the logs:

    2021-04-26 06:46:20,340 UTC [ERROR] ([ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.documentum.rest.util.LogHelper        : LogId: 9b360f83-335a-413e-87e3-481ba5cbf168, Status: 500, code: E_INTERNAL_SERVER_ERROR, message: An internal server error occurs.
    org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.NoSuchMethodError: com.google.common.base.Objects.firstNonNull(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:982)
            at com.emc.documentum.rest.servlet.RestDispatcherServlet.doDispatch(RestDispatcherServlet.java:33)
            at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
            at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
            at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
            at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
            at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
            at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
            at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.emc.documentum.rest.filter.ApplicationFilter.doFilter(ApplicationFilter.java:33)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
            at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.emc.documentum.d2.rest.filter.AppValidationFilter.doFilter(AppValidationFilter.java:35)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
            at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
            at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
            at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
            at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114)
            at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
            at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
            at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
            at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:66)
            at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
            at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
            at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214)
            at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
            at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347)
            at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.emc.documentum.d2.rest.filter.AppInfoFilter.doFilter(AppInfoFilter.java:39)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.emc.documentum.rest.security.filter.RepositoryNamingFilter.doFilter(RepositoryNamingFilter.java:40)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.emc.documentum.rest.filter.RestCorsFilter.doFilterInternal(RestCorsFilter.java:47)
            at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
            at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.emc.documentum.rest.filter.CompressionFilter.doFilter(CompressionFilter.java:73)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.emc.documentum.rest.log.MessageLoggingFilter.doFilter(MessageLoggingFilter.java:69)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at com.emc.documentum.rest.security.filter.ExceptionHandlerFilter.doFilterInternal(ExceptionHandlerFilter.java:31)
            at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3706)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3672)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:344)
            at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
            at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
            at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
            at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
            at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
            at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
            at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
            at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
            at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
            at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
            at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
            at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
            at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
            at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
    Caused by: java.lang.NoSuchMethodError: com.google.common.base.Objects.firstNonNull(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;
            at com.emc.documentum.d2fs.controller.D2AppInfoController.attribute(D2AppInfoController.java:160)
            at com.emc.documentum.d2fs.controller.D2AppInfoController.getProductInfo(D2AppInfoController.java:94)
            at com.emc.documentum.d2fs.controller.D2AppInfoController.get(D2AppInfoController.java:65)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
            at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
            at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
            at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:849)
            at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:760)
            at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
            ... 72 common frames omitted
    2021-04-26 06:46:20,414 UTC [INFO ] ([ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.documentum.rest.util.LogHelper        : XMLOutputFactory loaded com.ctc.wstx.stax.WstxOutputFactory.
    2021-04-26 06:46:20,416 UTC [INFO ] ([ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.documentum.rest.util.LogHelper        : XMLInputFactory loaded com.ctc.wstx.stax.WstxInputFactory.
    2021-04-26 06:46:20,451 UTC [INFO ] ([ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.documentum.rest.util.LogHelper        : Class com.emc.documentum.rest.config.DataBindingRuntime addLastPropertySource rest-api-data-binding.properties.

     

    The recently deployed release contained many things but, looking into it in detail, the most promising suspect was the Oracle WebLogic Server PSU (+ Coherence patch) from April 2021. Based on the logs, this looked like a GUAVA (Google core libraries for Java) related issue. Usually, the D2-REST application uses its own application libraries, but for security reasons the configuration is sometimes changed to force WebLogic to use the Oracle-provided ones instead. The goal is to keep the third-party libraries as up-to-date as possible, to reduce the potential security issues. At this customer, rolling back the PSU would have been a rather important security problem. After looking into the details, it was clear that the method mentioned above was deleted in GUAVA 21.0 (deprecated in 20.0). On the other hand, D2 16.5.1 comes with GUAVA 13.0.1 by default and D2-REST (+ D2-Smartview) comes with GUAVA 20.0. As part of the April PSU, this library was probably upgraded to 21.0 (I didn’t find any confirmation). Therefore, I tried to force D2-REST to re-use its internal GUAVA libraries instead (while keeping the others from WebLogic) by adding a new line inside the “<prefer-application-packages>” section:

    [weblogic@wsd2rest-0 ~]$ cd $APPLICATIONS/D2-REST/WEB-INF/
    [weblogic@wsd2rest-0 WEB-INF]$ cat weblogic.xml
    <?xml version="1.0" encoding="UTF-8"?>
    
    <weblogic-web-app>
      ...
      <container-descriptor>
        <!--prefer-web-inf-classes>true</prefer-web-inf-classes-->
        <prefer-application-packages>
          <package-name>org.slf4j</package-name>
          <package-name>com.google.common.*</package-name>
        </prefer-application-packages>
        <!--show-archived-real-path-enabled>true</show-archived-real-path-enabled-->
      </container-descriptor>
      ...
    </weblogic-web-app>
    [weblogic@wsd2rest-0 WEB-INF]$

     

    Adding the “<package-name>com.google.common.*</package-name>” entry above forces WebLogic to load the application-specific packages instead of its own. After a Managed Server restart, the issue was gone, which confirms that the April PSU was the culprit:

    Since we force WebLogic not to use its own jar files for some Google libraries, potential security issues related to these jar files are obviously re-opened… However, at some point, you have a choice to make between being secure with a non-working application OR potentially having some flaws but a working application. It’s obviously possible to go one step further and, instead of using “<package-name>com.google.common.*</package-name>”, which is rather generic, use a more refined definition of the package so that the affected scope is smaller.
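    As a small sanity check before redeploying, you can verify that the descriptor really contains the Guava entry. This is just a sketch: the descriptor content below is recreated for the demo, while in practice you would point the grep at $APPLICATIONS/D2-REST/WEB-INF/weblogic.xml.

```shell
# Recreate a minimal weblogic.xml for the demo.
DESC=$(mktemp)
cat > "$DESC" <<'EOF'
<weblogic-web-app>
  <container-descriptor>
    <prefer-application-packages>
      <package-name>org.slf4j</package-name>
      <package-name>com.google.common.*</package-name>
    </prefer-application-packages>
  </container-descriptor>
</weblogic-web-app>
EOF

# Check for the exact Guava filter entry (dots and star escaped for grep).
if grep -q '<package-name>com\.google\.common\.\*</package-name>' "$DESC"; then
  FILTER="present"
else
  FILTER="missing"
fi
echo "Guava filter: $FILTER"
rm -f "$DESC"
```

A "missing" result after a release would be a hint that the descriptor was overwritten by the deployment.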

    The same applies to D2-Smartview, since it is also a REST client and therefore relies heavily on such packages…

    Cet article Documentum – E_INTERNAL_SERVER_ERROR on D2-REST Product page related to GUAVA libraries after WebLogic PSU est apparu en premier sur Blog dbi services.


    Documentum – Applying Content-Security-Policy (CSP) on D2 while using WSCTF plugin


    If you have ever been working on a sensitive Documentum environment (they are all sensitive, are they not?!), you might already have worked on hardening your Web Servers. One of these aspects is to have a specific set of HTTP Security Headers. In this blog, I will talk about one in particular, which is the Content-Security-Policy (CSP).

     

    The recommendations are usually to setup a set of headers. Here is an example of header names and values (that are/should be considered secure):

    • X-XSS-Protection: 1; mode=block
    • X-Content-Type-Options: nosniff
    • Content-Security-Policy: default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';
    • X-Frame-Options: SAMEORIGIN
    • Cache-Control: no-cache, no-store
    • Pragma: no-cache
    • Strict-Transport-Security: max-age=63072000; includeSubDomains

     

    In case you have never heard of the CSP, I like the documentation that Mozilla provides: it is very clear and provides all the necessary information for you to understand how things work as well as what you can and cannot configure.

     

    The configuration of the CSP is really application dependent because it controls what the browser should be allowed to execute/fetch/render, based on the value of the HTTP Header. With the above example, a lot of things will be disabled completely because default-src is set to 'none' and everything that isn’t specifically defined in the HTTP Header falls back to the value of the default-src directive. This means that, for example, the browser will not even allow the load of a font from a ttf file (some applications, like D2, try to load ttf files). Everything set to 'self' means that if a resource comes from the same server (same scheme, host/dns/domain and port), then it will be allowed. For other details, I would strongly suggest you look at the Mozilla documentation.

     

    Applying all other HTTP Security Headers to D2 shouldn’t cause too many issues but applying the CSP as depicted in the example will completely break it. Here is a screenshot of the Google Chrome console with the “recommended” settings from a security point of view (Content-Security-Policy: default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';):

    Regarding the CSP, the usage of the 'unsafe-inline', 'unsafe-eval' or data: sources is usually considered insecure. Unfortunately, most applications (D2 is no exception) will require some of them in order to work, as you can see on the above screenshot. There is always the option to use a 'nonce-*' value or a hash, but that requires you to configure each and every resource, one by one… When you have hundreds of applications to manage, each trying to load dozens of different resources, that will most likely become an issue. Therefore, you will most probably end up with a more relaxed configuration. Let’s try D2 with a more realistic CSP based on the above errors (Content-Security-Policy: default-src 'none'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; connect-src 'self'; img-src 'self'; style-src 'self' 'unsafe-inline'; font-src 'self'; manifest-src 'self'; frame-src 'self';):

    That’s still not enough and that’s the purpose of this blog. The configuration, until now, is rather simple: you configure your Web Server, reload, and look for errors on the Chrome console. However, as you can see in the second screenshot, there is a problem with the D2 WSCTF plugin.

     

    When D2 is configured to use the WSCTF plugin, it actually executes a piece of code on your workstation, which D2 (i.e. the browser) accesses through a socket using the WebSocket Secure protocol (“wss://”). Therefore, this needs to be added to the allowed connection sources using “connect-src wss:”. Unless I’m mistaken, I don’t think it is possible to filter this configuration further. However, doing that isn’t sufficient: it will still fail with the last error shown in the previous screenshot: Refused to frame ” because it violates the following Content Security Policy directive: “frame-src 'self'”. The frame ” is also caused by the WSCTF plugin, to avoid redirections at the browser level when D2 talks to the plugin. Documentum created its own custom protocol for that purpose, and that’s what is still missing.

     

    In order to fix this issue and allow the WSCTF plugin to work, the needed configuration is “frame-src dctmctf:”. This might be documented somewhere, but I have never seen it before. To find it, I looked at the JavaScript code being executed in the browser (by setting a breakpoint), which gave me the following:

    As shown, the frame being started begins with “dctmctf:” and therefore allowing the frame source on that scheme fixes the issue (yes, all the messages are in red, meaning “ERROR”, but that’s how D2 prints these info messages…):

    Therefore, in case you are using D2 (and a lot of other applications), a more realistic CSP configuration will most probably be something like:

    Content-Security-Policy: default-src 'none'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; connect-src 'self' wss:; img-src 'self' data:; style-src 'self' 'unsafe-inline'; font-src 'self'; manifest-src 'self'; frame-src 'self' dctmctf:;
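    As a quick sketch, a naive substring check can verify that a configured CSP value contains the two WSCTF-related sources (wss: in connect-src and dctmctf: in frame-src). It doesn’t parse the policy properly, but it is enough for a smoke test:

```shell
# The CSP value to validate (here, the one from the article above).
CSP="default-src 'none'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; connect-src 'self' wss:; img-src 'self' data:; style-src 'self' 'unsafe-inline'; font-src 'self'; manifest-src 'self'; frame-src 'self' dctmctf:;"

# Naive check: wss: must appear after connect-src, dctmctf: after frame-src.
WSCTF_OK="yes"
case "$CSP" in *"connect-src"*"wss:"*) ;; *) WSCTF_OK="no";; esac
case "$CSP" in *"frame-src"*"dctmctf:"*) ;; *) WSCTF_OK="no";; esac
echo "WSCTF-compatible CSP: $WSCTF_OK"
```

In a real setup, the CSP variable could be filled from a curl -sI call against the Web Server instead of a hard-coded string.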

     

    As mentioned at the beginning of this blog, CSP is really application dependent. Unfortunately, most apps aren’t built with CSP in mind and therefore you must make concessions to be able to strengthen your Web Servers without breaking your applications.

    Cet article Documentum – Applying Content-Security-Policy (CSP) on D2 while using WSCTF plugin est apparu en premier sur Blog dbi services.

    Documentum – r_server_version reset to P00 instead of correct patch version


    A few weeks ago, an issue was identified at a customer which caused the value of “dm_server_config.r_server_version” to be reset to the GA version number (16.4 P00) instead of the currently deployed patch (16.4 P26). It happened randomly on some of the Content Servers, but not all. The different Content Servers are all deployed on a Kubernetes Cluster, use a single image (which contains the patched binaries) and use replicas to provide High Availability. That really means it’s the same image everywhere, but despite that, some of them would, at some point, end up with the P00 shown.

    Every time the pods would start/restart, the “dm_server_config.r_server_version” would properly display the P26 version, without exception. I spent several hours testing that issue, but I was never able to replicate it by simply restarting the Content Server. At startup, a Content Server updates the value of “dm_server_config.r_server_version” with the version of the currently used binaries (returned by the “documentum -v” command). I tried enabling the RPC and SQL traces after some discussion with the OpenText Support, but it didn’t show anything useful.

    Since it appeared randomly and since I couldn’t replicate the issue by restarting the Content Server, I simply left an environment up and running without any user activity on it and checked the value of “dm_server_config.r_server_version” every day. During the week, nothing happened: the P26 was constantly shown and the dm_server_config object wasn’t updated in any way. However, after the weekend, it was suddenly showing P00, so I started looking into the details:

    [morgan@k8s_master ~]$ kubectl get pod cs-0
    NAME   READY   STATUS    RESTARTS   AGE
    cs-0   1/1     Running   0          6d22h
    [morgan@k8s_master ~]$
    [morgan@k8s_master ~]$ kubectl describe pod cs-0 | grep Started
          Started:      Mon, 31 Jan 2022 10:53:48 +0100
    [morgan@k8s_master ~]$
    [morgan@k8s_master ~]$ kubectl exec -it cs-0 -- bash -l
    [dmadmin@cs-0 ~]$
    [dmadmin@cs-0 ~]$ date; iapi ${DOCBASE_NAME} -Udmadmin -Pxxx << EOC
    > retrieve,c,dm_server_config
    > dump,c,l
    > EOC
    Mon Feb  7 07:57:00 UTC 2022
    ...
    SYSTEM ATTRIBUTES
    
      r_object_type                   : dm_server_config
      r_creation_date                 : 7/15/2021 11:38:28
      r_modify_date                   : 2/6/2022 03:19:25
      ...
      r_server_version                : 16.4.0000.0248  Linux64.Oracle
      r_host_name                     : cs-0
      ...
    
    API> Bye
    
    [dmadmin@cs-0 ~]$
    [dmadmin@cs-0 ~]$ grep --color "16\.4\.0" $DOCUMENTUM/dba/log/${DOCBASE_NAME}.log
        OpenText Documentum Content Server (version 16.4.0260.0296  Linux64.Oracle)
    2022-01-31T09:59:36.943158      6994[6994]      0000000000000000        [DM_FULLTEXT_T_QUERY_PLUGIN_VERSION]info:  "Loaded FT Query Plugin: /app/dctm/server/product/16.4/bin/libDsearchQueryPlugin.so, API Interface version: 1.0, Build number: HEAD; Feb 14 2018 04:27:20, FT Engine version: xPlore version 16.4.0160.0089"
    Mon Jan 31 10:00:50 2022 [INFORMATION] [AGENTEXEC 8215] Detected during program initialization: Version: 16.4.0260.0296  Linux64
    2022-02-06T03:04:00.114945      23067[23067]    0000000000000000        [DM_FULLTEXT_T_QUERY_PLUGIN_VERSION]info:  "Loaded FT Query Plugin: /app/dctm/server/product/16.4/bin/libDsearchQueryPlugin.so, API Interface version: 1.0, Build number: HEAD; Feb 14 2018 04:27:20, FT Engine version: xPlore version 16.4.0160.0089"
    2022-02-06T03:19:25.601602      31553[31553]    0000000000000000        [DM_FULLTEXT_T_QUERY_PLUGIN_VERSION]info:  "Loaded FT Query Plugin: /app/dctm/server/product/16.4/bin/libDsearchQueryPlugin.so, API Interface version: 1.0, Build number: HEAD; Feb 14 2018 04:27:20, FT Engine version: xPlore version 16.4.0160.0089"
    [morgan@k8s_master ~]$

     

    As you can see in the above output, the modification date of the dm_server_config was Sunday 6-Feb-2022 at 03:19:25 UTC while the repository started on Monday 31-Jan-2022 at 09:59 UTC. Until Friday 4-Feb-2022, the returned version was P26 (16.4.0260.0296) but then after the weekend, it was P00 (16.4.0000.0248). The grep command above was used initially to verify that the repository started with the P26; that’s the very first line of the Repository log file: “OpenText Documentum Content Server (version 16.4.0260.0296 Linux64.Oracle)”. However, on the Sunday morning, two new messages suddenly appeared, related to the FT Query Plugin initialization. This usually means that the Content Server was re-initialized, and that’s indeed what happened:

    [dmadmin@cs-0 ~]$ grep "DM_SESSION_I_INIT_BEGIN.*Initialize Server Configuration" $DOCUMENTUM/dba/log/${DOCBASE_NAME}.log
    2022-01-31T09:59:34.302520      6994[6994]      0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Server Configuration."
    2022-02-06T03:03:56.846840      23067[23067]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Server Configuration."
    2022-02-06T03:19:23.644453      31553[31553]    0000000000000000        [DM_SESSION_I_INIT_BEGIN]info:  "Initialize Server Configuration."
    [dmadmin@cs-0 ~]$

     

    Therefore, something must have triggered the reinit, most probably a job: one that ran around 03:04 UTC and another around 03:20 UTC. Looking at the dm_sysobject created around that time gave me two good candidates for jobs that could have performed the reinit of the Content Server and might therefore be the cause of the switch from P26 to P00:

    API> ?,c,select r_object_id, r_creation_date, r_modify_date, object_name from dm_sysobject where r_creation_date>=date('06.02.2022 03:00:00','dd.mm.yyyy hh:mi:ss') and r_creation_date<=date('06.02.2022 03:25:00','dd.mm.yyyy hh:mi:ss');
    r_object_id       r_creation_date            r_modify_date              object_name
    ----------------  -------------------------  -------------------------  ----------------------------------
    090f45158029f8a6  2/6/2022 03:03:13          2/6/2022 03:03:14          2/6/2022 03:03:11 dm_Initialize_WQ
    090f45158029f8a9  2/6/2022 03:04:14          2/6/2022 03:04:15          2/6/2022 03:03:44 dm_DMClean
    090f45158029f8aa  2/6/2022 03:04:11          2/6/2022 03:04:11          Result.dmclean
    090f45158029f8ad  2/6/2022 03:04:15          2/6/2022 03:04:16          2/6/2022 03:04:14 dm_WfmsTimer
    090f45158029f8b4  2/6/2022 03:11:46          2/6/2022 03:11:47          2/6/2022 03:11:42 dm_Initialize_WQ
    090f45158029f8b7  2/6/2022 03:12:52          2/6/2022 03:12:53          2/6/2022 03:12:12 DocumentTraining
    090f45158029fc1e  2/6/2022 03:16:15          2/6/2022 03:16:15          2/6/2022 03:16:13 dm_Initialize_WQ
    090f45158029fc21  2/6/2022 03:22:17          2/6/2022 03:22:17          2/6/2022 03:19:13 dm_DMFilescan
    090f45158029fc22  2/6/2022 03:22:14          2/6/2022 03:22:14          Result.dmfilescan
    090f45158029fc28  2/6/2022 03:23:48          2/6/2022 03:23:48          2/6/2022 03:23:43 dm_Initialize_WQ
    (10 rows affected)

     

    As shown above, the first reinit was most probably triggered by the dm_DMClean job while the second one most probably came from the dm_DMFilescan job: if you look at the start time of these jobs (check the object_name) and the completion time (check the Result line), the reinit is right in the middle. Just in case, looking at the “a_last_completion” of these two jobs also confirmed it:

    API> ?,c,select r_object_id, a_last_completion, a_next_invocation, object_name from dm_job where object_name in ('dm_DMClean','dm_DMFilescan');
    r_object_id       a_last_completion          a_next_invocation          object_name
    ----------------  -------------------------  -------------------------  -------------
    080f45158000035b  2/6/2022 03:04:14          2/13/2022 03:00:00         dm_DMClean
    080f45158000035c  2/6/2022 03:22:17          2/13/2022 03:15:00         dm_DMFilescan

     

    Knowing that I got two good candidates, I obviously had to try manually to reproduce the issue. Therefore, I restarted the repository to get back to P26:

    [dmadmin@cs-0 ~]$ date; iapi ${DOCBASE_NAME} -Udmadmin -Pxxx << EOC
    > ?,c,select r_object_id, r_creation_date, r_modify_date, r_server_version from dm_server_config;
    > ?,c,select r_object_id, a_last_completion, a_next_invocation, object_name from dm_job where object_name in ('dm_DMClean','dm_DMFilescan');
    > EOC
    Mon Feb  7 08:45:13 UTC 2022
    ...
    Session id is s0
    API> 
    r_object_id       r_creation_date            r_modify_date              r_server_version
    ----------------  -------------------------  -------------------------  --------------------------------
    3d0f451580000102  7/15/2021 11:38:28         2/7/2022 08:41:02          16.4.0260.0296  Linux64.Oracle
    (1 row affected)
    
    API> 
    r_object_id       a_last_completion          a_next_invocation          object_name
    ----------------  -------------------------  -------------------------  -------------
    080f45158000035b  2/6/2022 03:04:14          2/13/2022 03:00:00         dm_DMClean
    080f45158000035c  2/6/2022 03:22:17          2/13/2022 03:15:00         dm_DMFilescan
    (2 rows affected)
    
    API> Bye
    [dmadmin@cs-0 ~]$

     

    Then, I ran the dm_DMClean job (after updating the a_next_invocation and window_interval so that the job could start), checked that it performed the reinit of the Content Server and verified the “dm_server_config.r_server_version” value:

    [dmadmin@cs-0 ~]$ date; iapi ${DOCBASE_NAME} -Udmadmin -Pxxx << EOC
    > ?,c,select r_object_id, r_creation_date, r_modify_date, r_server_version from dm_server_config;
    > ?,c,select r_object_id, a_last_completion, a_next_invocation, object_name from dm_job where object_name in ('dm_DMClean','dm_DMFilescan');
    > EOC
    Mon Feb  7 08:50:39 UTC 2022
    ...
    Session id is s0
    API> 
    r_object_id       r_creation_date            r_modify_date              r_server_version
    ----------------  -------------------------  -------------------------  --------------------------------
    3d0f451580000102  7/15/2021 11:38:28         2/7/2022 08:41:02          16.4.0260.0296  Linux64.Oracle
    (1 row affected)
    
    API> 
    r_object_id       a_last_completion          a_next_invocation          object_name
    ----------------  -------------------------  -------------------------  -------------
    080f45158000035b  2/7/2022 08:49:19          2/8/2022 03:00:00          dm_DMClean
    080f45158000035c  2/6/2022 03:22:17          2/8/2022 03:15:00          dm_DMFilescan
    (2 rows affected)
    
    API> Bye
    [dmadmin@cs-0 ~]$

     

    The reinit of the Content Server happened but it didn’t change the “dm_server_config.r_modify_date”, maybe because it was already showing P26 so nothing had to be updated? The only thing that changed is, obviously, the “dm_job.a_last_completion”, since the job ran. This means that the dm_DMClean is probably not the culprit, so I did the same for the dm_DMFilescan:

    [dmadmin@cs-0 ~]$ date; iapi ${DOCBASE_NAME} -Udmadmin -Pxxx << EOC
    > ?,c,select r_object_id, r_creation_date, r_modify_date, r_server_version from dm_server_config;
    > ?,c,select r_object_id, a_last_completion, a_next_invocation, object_name from dm_job where object_name in ('dm_DMClean','dm_DMFilescan');
    > EOC
    Mon Feb  7 08:59:23 UTC 2022
    ...
    Session id is s0
    API> 
    r_object_id       r_creation_date            r_modify_date              r_server_version
    ----------------  -------------------------  -------------------------  --------------------------------
    3d0f451580000102  7/15/2021 11:38:28         2/7/2022 08:52:34          16.4.0000.0248  Linux64.Oracle
    (1 row affected)
    
    API> 
    r_object_id       a_last_completion          a_next_invocation          object_name
    ----------------  -------------------------  -------------------------  -------------
    080f45158000035b  2/7/2022 08:49:19          2/8/2022 03:00:00          dm_DMClean
    080f45158000035c  2/7/2022 08:58:44          2/8/2022 03:15:00          dm_DMFilescan
    (2 rows affected)
    
    API> Bye
    [dmadmin@cs-0 ~]$
    [dmadmin@cs-0 ~]$ grep Report $DOCUMENTUM/dba/log/${DOCBASE_NAME}/sysadmin/DMFilescanDoc.txt
    DMFilescan Report For DocBase REPO01 As Of 2/7/2022 08:52:22
    Report End  2/7/2022 08:58:43
    [dmadmin@cs-0 ~]$

     

    As you can see above, the dm_DMFilescan did change the “dm_server_config.r_server_version” to P00 and therefore the “dm_server_config.r_modify_date” was also updated. Checking the dm_DMFilescan job report shows that it took around 6 minutes to complete and that the update of the dm_server_config object happened around 10 seconds after the start of the job.

    Therefore, the reason why the “dm_server_config.r_server_version” is being changed “randomly” from P26 back to P00 isn’t actually random: it’s due to the execution of the dm_DMFilescan job. In an HA environment, since this job can run on any of the available Content Servers, it gave a sense of randomness, but it’s not random at all. This information was provided to OpenText and the bug CS-136387 was opened to track it.

    While doing further checks to try to understand the root cause, I saw the following in the methods’ dmbasic scripts:

    [dmadmin@cs-0 ~]$ cd $DM_HOME/bin/
    [dmadmin@cs-0 bin]$
    [dmadmin@cs-0 bin]$ documentum -v
    OpenText Documentum Release Version: 16.4.0260.0296  Linux64.Oracle
    [dmadmin@cs-0 bin]$
    [dmadmin@cs-0 bin]$ ls -ltr dmfilescan* dmclean*
    -rwxr-x--- 1 dmadmin dmadmin 13749675 Nov 11 13:38 dmfilescan
    -rwxr-x--- 1 dmadmin dmadmin 13769063 Nov 11 13:38 dmclean.patch.bak
    -rwxr-x--- 1 dmadmin dmadmin 13866290 Nov 11 13:39 dmclean
    [dmadmin@cs-0 bin]$
    [dmadmin@cs-0 bin]$ strings dmfilescan | grep "16.4"
    16.4.0000.0248
    [dmadmin@cs-0 bin]$
    [dmadmin@cs-0 bin]$ strings dmclean.patch.bak | grep "16.4"
    16.4.0000.0248
    [dmadmin@cs-0 bin]$
    [dmadmin@cs-0 bin]$ strings dmclean | grep "16.4"
    16.4.0260.0296
    [dmadmin@cs-0 bin]$

     

    As shown above, the root cause lies in the Documentum patching of the binaries:

    • dmclean: the dmbasic script is updated properly by the patch, and the version number that seems to be hardcoded inside it reflects the P26
    • dmfilescan: the dmbasic script isn’t updated by the patch (there is no “*.patch.bak” file) and therefore it still contains the hardcoded P00 version
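    Based on the above, a small sketch to detect such un-patched dmbasic scripts could simply compare the version string embedded in each script with the one reported by the binaries. The helper below is a hypothetical illustration using the values from the outputs above; in a real environment, the embedded value would come from `strings $DM_HOME/bin/<script> | grep "16.4"` and the expected one from `documentum -v`:

    ```shell
    #!/bin/bash
    # Hypothetical helper to flag dmbasic scripts whose embedded version string
    # doesn't match the deployed binaries (names and values below are taken
    # from the outputs shown earlier in this blog).
    check_patch_level() {
        local script_name="$1"
        local embedded="$2"    # e.g. strings $DM_HOME/bin/dmfilescan | grep "16.4"
        local expected="$3"    # e.g. documentum -v
        if [ "$embedded" = "$expected" ]; then
            echo "OK: ${script_name} is at ${embedded}"
        else
            echo "WARNING: ${script_name} is still at ${embedded} while binaries are at ${expected}"
        fi
    }

    # Values taken from the outputs above:
    check_patch_level "dmclean" "16.4.0260.0296" "16.4.0260.0296"
    check_patch_level "dmfilescan" "16.4.0000.0248" "16.4.0260.0296"
    ```

    Running it against this environment flags dmfilescan only, which matches the behavior observed with the dm_DMFilescan job.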

     

    Cet article Documentum – r_server_version reset to P00 instead of correct patch version est apparu en premier sur Blog dbi services.

    Documentum – IDS on Windows Server not able to start with error 31: device not functioning


    Documentum Interactive Delivery Services, or IDS, is a Documentum product that can be useful to publish some documents to an external web server or similar. It usually works rather well, even if there haven’t been many changes in the product in years, maybe because it does what it is supposed to do… As a big fan of Linux systems, I pretty much never work on Windows Servers, but when I do, somehow, there is always trouble! Maybe I’m cursed or maybe the OS is really not for me…

    The last time I worked on a Windows Server, I had to install an IDS 7.3 on a bunch of servers (POC, DEV, QA, PRD). The POC installation went smoothly and everything was working as expected, but then trouble started with the other three, where the IDS Service couldn’t start at all, failing with error 31: device not functioning.

    As is often the case, this is a rather generic message. Therefore, looking at the Event Viewer gave some more information:

    The text extract of this event is:

    <Event >
      <System>
        <Provider Name="DCTM WebCache Server" />
        <EventID Qualifiers="49152">1018</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2020-12-16T12:23:17.000000000Z" />
        <EventRecordID>3105</EventRecordID>
        <Channel>Application</Channel>
        <Computer>hostname.domain.com</Computer>
        <Security />
      </System>
      <EventData>
        <Data>Load JVM DLL Failed on LoadLibrary (%s)</Data>
        <Data>D:\ids\target\product\jre\win\bin\server\jvm.dll</Data>
      </EventData>
    </Event>

     

    It is rather clear that the issue is related to Java, but everything looked good at first sight. Comparing the working server with a non-working one, both had the same setup and the same environment. Listing all environment variables on the two servers showed the same output except for a customer-specific value that apparently identified the base image used to install the Windows Server (2012R2 02.02 vs 2012R2 02.03). Looking into it further, even if the JAVA_HOME variable wasn’t set on either of the servers, I still tried to add it to see the behavior:

    • Click on the Start button
    • Write “Edit the system environment variables” in the search box and click on the result
    • Click on the Environment Variables button
    • Create the system (bottom of screen) variable JAVA_HOME with: D:\ids\target\product\jre\win (or whatever path you have installed your IDS to)
    • Update the system (bottom of screen) variable PATH, prepend it with: %JAVA_HOME%\bin;

    After doing that, the IDS Service was actually able to start… I do not have the complete explanation, but this issue must have been caused by the different OS builds. Even if they are both 2012R2 (the latest supported version for IDS 7.3), there must be some differences in the customer-specific build (automated OS installation) that cause the issue to happen whenever JAVA_HOME isn’t set up in the environment. This is normally not needed by IDS since Java is included in the product directly and, therefore, all the commands and libraries already point to the expected path. Nevertheless, if you are facing the same issue, it might be worth giving it a try!
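    For reference, the JAVA_HOME part of the GUI steps above can also be done from an elevated command prompt; the path below matches the installation used in this blog and is an assumption for your own environment:

    ```bat
    :: Hypothetical equivalent of the first GUI step (run from an elevated prompt);
    :: setx /M writes a system-wide variable, and a new session is needed to see it
    setx /M JAVA_HOME "D:\ids\target\product\jre\win"
    ```

    Note that the PATH update is better done via the GUI as listed above: setx stores plain strings, so a “%JAVA_HOME%\bin” reference added with setx would not stay dynamic.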

     

    L’article Documentum – IDS on Windows Server not able to start with error 31: device not functioning est apparu en premier sur dbi Blog.

    Documentum – RCS/CFS Upgrade in silent fails with IndexOutOfBoundsException


    Several years ago, I wrote a series of blogs regarding the silent installation of Documentum components, including for a RCS/CFS (the HA part of a Repository). In there, I described the process and gave an example of a properties file, with all the parameters that are needed and a quick explanation for each of them. As I described in the previous blogs, and that is true for most Documentum components, in case you want to upgrade instead of installing from scratch, you more or less just have to change the “CREATE” action to “UPGRADE“. There is, however, a small specificity for the Remote Content Server, and that is the point of this blog.

    Trying to upgrade a RCS/CFS by reusing the install silent properties file with the UPGRADE action will give something like this (with DEBUG logs enabled):

    [dmadmin@cs-2 ~]$ cat $DM_HOME/install/logs/install.log
    12:41:07,100 DEBUG [main]  - ###################The variable is: LOG_IS_READY, value is: true
    12:41:07,100 DEBUG [main]  - ###################The variable is: FORMATED_PRODUCT_VERSION_NUMBER, value is: 20.2.0000.0110
    12:41:07,101  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: CfsConfigurator
    12:41:07,101  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 20.2.0000.0110
    12:41:07,101  INFO [main]  -
    ...
    12:41:07,224 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to resolve variable
    12:41:07,225 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to check condition
    12:41:07,225 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to setup
    12:41:07,225 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - *******************Start action com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName***********************
    12:41:07,230 ERROR [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Index -1 out of bounds for length 3
    java.lang.IndexOutOfBoundsException: Index -1 out of bounds for length 3
            at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
            at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
            at java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
            at java.base/java.util.Objects.checkIndex(Objects.java:372)
            at java.base/java.util.ArrayList.get(ArrayList.java:459)
            at com.documentum.install.multinode.cfs.common.services.DiServerContentServers.getServer(DiServerContentServers.java:192)
            at com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName.setup(DiWAServerCfsTestServerConfigObjectName.java:23)
            at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:73)
            at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.an(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.al(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
            at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
            at com.zerog.ia.installer.AAMgrBase.runPreInstall(Unknown Source)
            at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
            at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
            at com.zerog.ia.installer.Main.main(Unknown Source)
            at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.base/java.lang.reflect.Method.invoke(Method.java:566)
            at com.zerog.lax.LAX.launch(Unknown Source)
            at com.zerog.lax.LAX.main(Unknown Source)
    12:41:07,233  INFO [main]  - The INSTALLER_UI value is SILENT
    12:41:07,233  INFO [main]  - The KEEP_TEMP_FILE value is true
    ...
    [dmadmin@cs-2 ~]$

     

    This error shows that the installer is failing while trying to get some details of the repository to upgrade. The exception stack isn’t very clear about what exactly it fails to retrieve: docbase name, dm_server_config name, hostname, service name or something else. Since I don’t have access to the source code, I worked with OpenText on the SR#4593447 to get insight into what is missing. It turns out that it is actually the Service Name that cannot be found in the properties file. When a RCS/CFS is installed, it uses the property called “SERVER.DOCBASE_SERVICE_NAME” which is described in the previous blog about silent installation. This is the only parameter required for the Service Name. In case of an upgrade, you could think that the installer would be smart enough to fetch the value from the server.ini directly or, at least, take the same parameter as during the installation, but that’s not the case. In fact, it only relies on the properties file and it uses another parameter that is only required for upgrade/delete: “SERVER.COMPONENT_NAME“.

    Therefore, if you want to upgrade a RCS/CFS, you will need to provide the Service Name in the “SERVER.COMPONENT_NAME” parameter (same value as “SERVER.DOCBASE_SERVICE_NAME“). It’s not a problem to put it in both the install and upgrade properties files: you can put as much as you want in these, and if Documentum doesn’t recognize a parameter, it will just ignore it. The OpenText Engineers weren’t able to find the reason why there are two different parameters for the same purpose, but that apparently comes from way back…
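    For illustration, a minimal excerpt of the upgrade properties file would then contain both parameters with the same value; “Repo1” below is a placeholder for the actual repository service name of your environment:

    ```properties
    # Hypothetical excerpt of an RCS/CFS upgrade silent properties file;
    # both parameters must point to the same service name
    SERVER.DOCBASE_SERVICE_NAME=Repo1
    SERVER.COMPONENT_NAME=Repo1
    ```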

    Anyway, once you add the parameter with its value and restart the upgrade of the RCS/CFS, it should work properly:

    [dmadmin@cs-2 ~]$ cat $DM_HOME/install/logs/install.log
    13:27:16,953 DEBUG [main]  - ###################The variable is: LOG_IS_READY, value is: true
    13:27:16,953 DEBUG [main]  - ###################The variable is: FORMATED_PRODUCT_VERSION_NUMBER, value is: 20.2.0000.0110
    13:27:16,954  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: CfsConfigurator
    13:27:16,954  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 20.2.0000.0110
    13:27:16,954  INFO [main]  -
    ...
    13:27:17,091 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to resolve variable
    13:27:17,091 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to check condition
    13:27:17,091 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - Start to setup
    13:27:17,091 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - *******************Start action com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName***********************
    13:27:17,096 DEBUG [main]  - ###################The variable is: SERVER.SERVER_INI_FILE_NAME, value is: server_cs-2_Repo1.ini
    13:27:17,097 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsTestServerConfigObjectName - *******************************end of action********************************
    13:27:17,100 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - Start to resolve variable
    13:27:17,100 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - Start to check condition
    13:27:17,100 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - Start to setup
    13:27:17,100 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - *******************Start action com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo***********************
    13:27:17,100 DEBUG [main]  - ###################The variable is: SERVER.DOCBASE_HOME, value is: $DOCUMENTUM/dba/config/Repo1
    13:27:17,101 DEBUG [main]  - ###################The variable is: common.old.aek.key.name, value is: aek.key
    13:27:17,101 DEBUG [main]  - ###################The variable is: common.aek.key.name, value is: aek.key
    13:27:17,101 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - The aek passphrase is ***************
    13:27:17,101 DEBUG [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsLoadAEKInfo - *******************************end of action********************************
    ...
    [dmadmin@cs-2 ~]$

     

    It’s not always easy to work with silent installations of Documentum because the documentation for that part is quite poor at the moment; this parameter is not documented anywhere, for example. I mean, none of the parameters are documented for the CS part, but at least there is usually a kind of “template” under $DM_HOME/install/silent/templates. Unfortunately, this parameter doesn’t even appear there. So, yes, it might be a little bit difficult, but once you have it working, you can gain a lot, so it’s still worth sweating a little.

     

    L’article Documentum – RCS/CFS Upgrade in silent fails with IndexOutOfBoundsException est apparu en premier sur dbi Blog.

    Documentum – Configuration of an IDS Target Memory/RAM usage on Windows


    A few months ago, I had to work on a Windows Server to set up an IDS Target. The installation and configuration of the target weren’t that different compared to a Linux host, so it wasn’t difficult at all (if you ignore some strange behavior like the one described here, for example). But there was one point about which I was a little bit skeptical: how do you configure the IDS Target Memory/RAM assignment for its JVM? On Linux, it’s very easy since the IDS Target configuration creates some start/stop scripts, and in these, you can easily find the Java commands executed. Therefore, changing the JVM Memory is just a matter of adding the usual Xms/Xmx parameters there…

     

    Unfortunately, on Windows, IDS will automatically set up a service and this service uses a .exe file, which you, therefore, cannot modify in any way. OpenText (or rather EMC before) could have used a cmd or ps1 script to call the Java command, similarly to Linux, or even used a java.ini file somewhere, but that’s not the case.

     

    By default, the JVM will probably use something like 256 MB of RAM. The exact value depends on the Java version and potentially on your server as well (how much RAM the host has). There are already a lot of blogs and posts on how to check how much memory the JVM uses by default but, for quick reference, you can check it with something like:

    # Linux:
    java -XX:+PrintFlagsFinal -version | grep HeapSize
    
    # Windows:
    java -XX:+PrintFlagsFinal -version | findstr HeapSize

     

    Having 256 MB of RAM for the IDS Target might be sufficient if the number of files to transfer is rather “small”. However, at some point, you might end up facing an OutOfMemory error, most probably whenever the IDS Target tries to open the properties.xml file from the previous full-sync, or directly during the initial full-sync. If the file is too big (bigger than the memory of the JVM), it will probably end up in an OOM and your synchronization will fail.

     

    Therefore, how do you increase the default IDS Target JVM settings on Windows? It’s actually not that complicated, but you will need to update the registry directly:

    • Open regedit on the target Windows Server
    • Navigate to (that’s an example with secure IDS on port 2787, your path might be different):
      • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OpenText Documentum IDS Target_secure_2787\Env
    • Double click on the registry key inside this folder named “Values”
    • Update the “jvmoptions” definition (around the end normally) to add the Xms and Xmx parameters like:
      • from: “jvmoptions=-Dfile.encoding=UTF-8”
      • to: “jvmoptions=-Dfile.encoding=UTF-8 -Xms2g -Xmx4g”
    • Restart the IDS Target Service
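    For clarity, after the edit, the relevant entry inside the “Values” multi-string should look like the following (all other entries stay unchanged; the Xms/Xmx values are examples to size for your own load):

    ```ini
    jvmoptions=-Dfile.encoding=UTF-8 -Xms2g -Xmx4g
    ```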

     

     

    With that, the IDS Target should now be allowed to use up to 4 GB of RAM, which should hopefully give you enough space for a proper synchronization without OutOfMemory errors.

     

    L’article Documentum – Configuration of an IDS Target Memory/RAM usage on Windows est apparu en premier sur dbi Blog.
