Problem Statement: This knowledge base article describes the various alarm capabilities within Aspen Tank and Operations Manager.
Solution: Aspen Tank and Operations Manager has three types of standard, built-in alarms. 1. 90% Target - This alarm is raised for a movement when the accrued mass quantity reaches 90% of the scheduled quantity. 2. Unauthorized Mov - This alarm is raised for a tank when the volume increases or decreases but there is no active movement for the tank. 3. Struck Gauge - This alarm is raised when there is an active movement for a tank but the level or volume instrument does not change. It is also possible to generate a customized alarm through database triggers specific to your site requirements. However, this type of custom work is typically implemented by AspenTech's Professional Services team on a consulting basis, as deep knowledge of the Aspen Tank and Operations Manager database table structure is required. An example of such a custom alarm could be an '80% Target Alarm'. Keywords: None References: None
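As an illustration only (this is not AtOMS code), the three built-in alarm conditions described above can be sketched in Python. All names in this sketch are hypothetical:

```python
# Illustrative sketch of the three built-in AtOMS alarm conditions.
# Function and parameter names are invented for clarity, not AtOMS APIs.

def check_alarms(tank_volume_changed, has_active_movement,
                 accrued_mass=None, scheduled_mass=None):
    """Return the list of alarm names that would be raised."""
    alarms = []
    # 90% Target: accrued mass reaches 90% of the scheduled quantity.
    if (has_active_movement and accrued_mass is not None
            and scheduled_mass and accrued_mass >= 0.9 * scheduled_mass):
        alarms.append("90% Target")
    # Unauthorized Mov: tank volume changes with no active movement.
    if tank_volume_changed and not has_active_movement:
        alarms.append("Unauthorized Mov")
    # Struck Gauge: active movement but the instrument does not change.
    if has_active_movement and not tank_volume_changed:
        alarms.append("Struck Gauge")
    return alarms
```

For example, a volume change with no active movement yields only "Unauthorized Mov", while an active movement whose gauge is frozen at 95% of target raises both the 90% Target and Struck Gauge conditions.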
Problem Statement: Aspen AtOMS cannot create a movement when the control element is an instrument. The instrument has been associated with the pipe part of the movement. The node instrument detail shows the instrument, but the screen in AtOMS does not. This Knowledge Base article shows how to overcome this problem.
Solution: The reason AtOMS cannot create a movement when the control element is an instrument is that lineups in AtOMS start with the same DBINDEX (10000) as the standard Advisor DBINDEX. The solution is to run queries that change the DBINDEX of AtOMS lineups to begin with a number much higher than the standard Advisor DBINDEX, such as 500000. Below is a set of four queries that does that:
Script 1: UPDATE ATOMS_NODE_INSTRUMENT SET ATOMS_NODE_INSTRUMENT.NODE_ID = ATOMS_NODE_INSTRUMENT.NODE_ID + 500000 WHERE ATOMS_NODE_INSTRUMENT.APPLICATION_FLAG = '10000000' AND ATOMS_NODE_INSTRUMENT.NODE_ID IN (SELECT DISTINCT DBINDEX FROM ATOMS_LINEUP)
Script 2: UPDATE ATOMS_LINEUP SET DBINDEX = DBINDEX + 500000
Script 3: UPDATE ATOMS_LINEUP SET PIPE_ID = PIPE_ID + 500000 WHERE USE_TEMP = 1
Script 4: UPDATE ATOMS_MOVEMENT SET LINEUP_ID = LINEUP_ID + 500000 WHERE LINEUP_ID <> 0
NOTE: An enhancement request has been submitted to development to modify the Aspen Database Wizard to start AtOMS lineups with a much higher number than the standard Advisor DBINDEX. Keywords: line up line-up db index References: None
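A minimal in-memory sqlite3 sketch can show the effect of the +500000 offset. The table and column names are copied from the scripts above; the sample rows are invented, and run order matters (Script 1 must run before Script 2, since it matches NODE_ID against the pre-offset lineup DBINDEX values):

```python
import sqlite3

# Minimal mock of the relevant AtOMS tables; the data rows are invented.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ATOMS_LINEUP (DBINDEX INTEGER, PIPE_ID INTEGER, USE_TEMP INTEGER);
CREATE TABLE ATOMS_NODE_INSTRUMENT (NODE_ID INTEGER, APPLICATION_FLAG TEXT);
CREATE TABLE ATOMS_MOVEMENT (LINEUP_ID INTEGER);
INSERT INTO ATOMS_LINEUP VALUES (10000, 10001, 1), (10002, 0, 0);
INSERT INTO ATOMS_NODE_INSTRUMENT VALUES (10000, '10000000'), (42, '10000000');
INSERT INTO ATOMS_MOVEMENT VALUES (10000), (0);
""")

# The four scripts from the article, applied in order.
db.execute("""UPDATE ATOMS_NODE_INSTRUMENT SET NODE_ID = NODE_ID + 500000
              WHERE APPLICATION_FLAG = '10000000'
                AND NODE_ID IN (SELECT DISTINCT DBINDEX FROM ATOMS_LINEUP)""")
db.execute("UPDATE ATOMS_LINEUP SET DBINDEX = DBINDEX + 500000")
db.execute("UPDATE ATOMS_LINEUP SET PIPE_ID = PIPE_ID + 500000 WHERE USE_TEMP = 1")
db.execute("UPDATE ATOMS_MOVEMENT SET LINEUP_ID = LINEUP_ID + 500000 WHERE LINEUP_ID <> 0")

# Lineup DBINDEX values now start well above the Advisor range.
print(sorted(r[0] for r in db.execute("SELECT DBINDEX FROM ATOMS_LINEUP")))
```

Note that node instruments whose NODE_ID does not match a lineup DBINDEX (42 in the mock data) are left untouched, as are movements with LINEUP_ID = 0.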
Problem Statement: When a movement is not linked to an order, column ORDER_ID is 0, and all movements with ORDER_ID=0 appear in the 'Attach Movements to Order' window, because AtOMS retrieves from the database, and displays in the form, all movements not assigned to orders (ORDER_ID=0). When there are too many movements not associated with orders, the large amount of data retrieved causes the AtOMS VB form to overflow, and a blank error message appears. This Knowledge Base article provides steps to resolve the above error message.
Solution: Please download and review the attached MS Word document, which contains a full explanation of the problem and provides queries that resolve the issue. Keywords: OK error References: None
Problem Statement: The Aspen Production Execution Manager CompileProc command fails with a Connection refused message.
Solution: Do a basic test of the eBRS/Apache connection. From the flags.m2r_cfg file you can concatenate a string to test the Apache communication. Look at these four lines, then put together a URL like the one below and test it: # Synchronized timestamp from servlet - if not defined timestamp is obtained from DB (see DB_SERVER_TZ) TIMESTAMP_HOST = <server name> TIMESTAMP_PORT = 8080 TIMESTAMP_URI = /AeBRSserver/servlet/aebrsutctime The resulting URL: http://<server name>:8080/AeBRSserver/servlet/aebrsutctime This should bring back an Internet Explorer screen with a long string of integers (the UTC timestamp from Apache). If it returns 'The page cannot be displayed', the issue is that the Apache Tomcat software is not functioning properly. Keywords: Apache Tomcat UTCTime connection refused References: None
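The URL construction above can be sketched as follows. The config keys are the ones named in this article; the host value is a placeholder, and the parsing code is illustrative rather than eBRS's own:

```python
# Build the Apache test URL from the three flags.m2r_cfg keys named above.
# The config text below is a placeholder example, not a real site's file.
config_text = """
# Synchronized timestamp from servlet
TIMESTAMP_HOST = myserver
TIMESTAMP_PORT = 8080
TIMESTAMP_URI = /AeBRSserver/servlet/aebrsutctime
"""

cfg = {}
for line in config_text.splitlines():
    if "=" in line and not line.lstrip().startswith("#"):
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip()

test_url = "http://{TIMESTAMP_HOST}:{TIMESTAMP_PORT}{TIMESTAMP_URI}".format(**cfg)
print(test_url)  # paste this URL into a browser, or fetch it with curl/urllib
```

A plain-text page of digits back from this URL confirms Apache Tomcat and the AeBRSserver servlet are responding.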
Problem Statement: After a new install of the Aspen AtOMS client, it throws the following error message: AtOMS Client Error connecting to the AtOMS Monitor. Please contact System Administrator. Refresh interval of the display was set to 5 min. Error=463. Desc= class not registered on local machine
Solution: This error might be caused by a missing component due to a DLL not being registered properly during installation. Re-registering AtOMSServerDbLib.dll and AtOMSMonitorps.dll ('..\Program Files\AspenTech\AtOMS'), as shown below, should help resolve the aforementioned error: 1. Open a command prompt (START --> Run --> type CMD). 2. Type: regsvr32 <Atoms root folder>\AtOMSServerDbLib.dll 3. Type: regsvr32 <Atoms root folder>\AtOMSMonitorps.dll Keywords: Refresh Interval error=463 Atoms monitor References: None
Problem Statement: When you open the Tank Summary view from the Aspen Tank and Operations Manager Client (aka the AtOMS client), it displays different statuses for the tanks. What are these statuses and what do they mean to the user?
Solution: Aspen Tank and Operations Manager (AtOMS) has three Real Time Status Indicators for tanks: Static, Up and Down. Static means the tank is currently not involved in any operation. Up means the tank is currently receiving material from the plant or another tank. Down means the tank is currently discharging material to another destination, such as a unit or another tank. All of these statuses are real time (current) and refer to a specific point in time. Please note that the history of these statuses is not saved in the database. Keywords: Tank status Tank Summary AtOMS Up Down Static References: None
Problem Statement: Order status for each order is held as an integer code in the STATUS column of the EBR_ORDER table. However, the integer order status 6 in EBR_ORDER can show several different statuses in the MOC Orders module. So what is the definition of each integer representing an order state? And why do some EBR_ORDER integer states show different order states in the MOC view?
Solution: NOTE: This article is written because occasionally Aspen eBRS administrators want to gather information by reading Aspen eBRS tables directly. Keep in mind that Aspen eBRS tables can never be written to from external applications, unless those applications use Aspen eBRS API functions to do the writing. Any modification of data in user tables or application tables (those beginning with EBR_) done any other way will result in the Database Externally Modified error (if that happens, contact AspenTech Support to help resolve the problem). However, it is fine to read the tables directly from outside Aspen eBRS to gather information. Taking into account several variable conditions (for example: is a Basic Phase currently executing? Has a Basic Phase been cancelled, leading to a Cancel by Phase status?), Aspen eBRS determines a final logical order status derived from the physical status in the EBR_ORDER table. The table below indicates all possible physical and logical states. Only physical states (indicated in green) are visible in the EBR_ORDER table. Logical states (indicated in blue) are only shown in the Order Admin module in Aspen eBRS. Aside from reading the EBR_ORDER table directly, when using the eBRS API, GET_RAW_ORDER_STATE returns the equivalent information about physical order states, and GET_ORDER_STATE returns both physical and logical order states (see the Aspen eBRS API Programmer's Guide for more information on these functions). Additional notes about order status 4 (Finished): The table above shows Finished (i.e. EBR_ORDER status 4) as a strictly physical state. This is correct for new installations of Version 2006.5 and later (i.e. orders that appear as Finished in MOC have physical status (4) in EBR_ORDER). However, Version 2006 and earlier had both a physical and a logical Finished status, and migrated systems preserve that earlier behavior. Read on for more information.
In Version 2006 and before, orders that show a Finished status in MOC actually have physical status 6 (Active) in the EBR_ORDER table. So Finished is really a logical state, not a physical one. eBRS determines this logical state by evaluating the Basic Phases in each order: if all Basic Phases are Finished or Skipped, the physical status of Active (6) remains in EBR_ORDER, but the logical status of Finished is shown in MOC. For these earlier eBRS versions the only way to get a physical Finished state is programmatically, using the SET_ORDER_STATE function, or to set the order to another valid state in the Order Module (the only one allowed is Archived, changing the physical state from 6 to 7). The new behavior introduced in Version 2006.5 is controlled by the FINISH_ORDER_BY_PFC_FLOW flag, which in a new installation has a default value of 1. With that default, when the flow of execution has evaluated all symbols in the PFC, the physical order state is automatically changed to 4. For these systems no logical state of Finished exists; it is always physical. However, when a system is migrated from Version 2006 or earlier, the AeBRSInstaller will set FINISH_ORDER_BY_PFC_FLOW=0, preserving the legacy behavior. This is important to avoid having a Version 2006.5 or later system look at the order statuses and interpret all EBR_ORDER status 6 integers as Active. Instead, with a flag setting of 0, eBRS uses the previous behavior: it checks Basic Phase and Condition statuses and renders the correct logical status of Finished for all orders whose Basic Phases are in a Finished or Skipped state. If an eBRS Administrator contemplating a migration from Version 2006 or before prefers to have a physical status of 4 set automatically by the system for all Finished orders, the recommended approach is as follows: A. As part of the migration steps, before upgrading, use the MOC order module to filter for all Finished orders, select them all, and choose Archive.
This changes the physical status in the EBR_ORDER table from 6 to 7. B. After migration, add FINISH_ORDER_BY_PFC_FLOW=1 to the system-wide config.m2r_cfg file to enforce the new behavior. C. If the system was migrated from Version 2006 or earlier and the flag was set to FINISH_ORDER_BY_PFC_FLOW=1, it may not be possible to change the Active status to Archived using the Order Admin module, but the SET_ORDER_STATE function can still be used to change active orders to an Archived status. Keywords: None References: None
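The legacy (FINISH_ORDER_BY_PFC_FLOW=0) derivation of the logical Finished state can be sketched as follows. Only the status codes actually named in this article (4 Finished, 6 Active, 7 Archived) are mapped, and the function name is illustrative, not an eBRS API call:

```python
# Physical EBR_ORDER status codes named in this article. Other codes exist
# in eBRS but are deliberately not guessed at here.
PHYSICAL_STATES = {4: "Finished", 6: "Active", 7: "Archived"}

def logical_order_state(physical_status, basic_phase_states,
                        finish_order_by_pfc_flow=0):
    """Illustrative sketch: derive the status MOC would display.

    With the legacy flag (0), an order whose physical status is Active (6)
    is shown as the logical state Finished once every Basic Phase is
    Finished or Skipped; EBR_ORDER itself still holds 6.
    """
    name = PHYSICAL_STATES.get(physical_status, "Unknown")
    if (finish_order_by_pfc_flow == 0 and physical_status == 6
            and basic_phase_states
            and all(s in ("Finished", "Skipped") for s in basic_phase_states)):
        return "Finished"   # logical state only, never written to EBR_ORDER
    return name
```

So a legacy order with physical status 6 and all Basic Phases Finished or Skipped displays as Finished, while the same physical status with a running Basic Phase displays as Active; with the flag set to 1, status 6 always displays as Active because Finished is a physical state (4).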
Problem Statement: How can I specify the password in a program accessing the API (for example, a SOAP client) so that it doesn't expire?
Solution: The Version 6.0 API manual incorrectly states that user accounts used for API access can be set so that their password does not expire. The account must be active (inactive accounts will generate an error if used for API access), and by design a password has a maximum lifetime of 365 days. Therefore, a specific account for API access will need to be managed manually by the AeBRS Administrator to make sure the password is renewed before automatic expiration occurs. In Version 7.0 and later, this problem does not exist, since later versions of AeBRS use Windows user accounts rather than internal eBRS accounts, so a dedicated account for AeBRS API access can be created and set to never expire. Keywords: Soap m2r.Util.m2rRequireException: R4106: Internal error. Terminate your work saving all data and start again. If the error persists contact the AspenTech support desk. References: None
Problem Statement: Aspen Production Execution Manager fails to connect, or displays an error dialog filled with repeated failure to initialize the web service messages:
Solution: There are a couple of configuration points the Production Execution Manager Administrator depends on: 1. List Nodename in Config Module. Using the Aspen Production Execution Manager Administrator (i.e. backup and restore activity) is an auditable event. Therefore, the node where the Administrator is started needs to exist in the Config module Workstation table so the audit trail can accurately record where the backup and restore activity takes place. Note that the Administrator specifically cares about the nodename where it is running, not the WORKSTATION key from the top of the config.m2r_cfg file. This is because the Administrator is an independent tool that can be installed stand-alone on a client workstation where the MOC module may not be present, and so no config.m2r_cfg file is present. Additionally, some networks will validate correctly based on the short nodename (MyPC), while others may require a fully qualified nodename (like MyPC.MyDomainName.com). If you have added the short nodename and the Administrator is still not connecting, try adding the fully qualified name to the Workstation table. 2. Correct WSDL Configuration. The Aspen eBRS Administrator needs access to the Aspen eBRS API, which is made available through Apache. If Apache cannot be reached, or is dead, this error results. Restart Apache and try launching the Administrator again. If it still does not connect, investigate further: the Administrator depends on the aebrs_api.wsdl file, typically located here: C:\Program Files\Apache Software Foundation\Tomcat 5.5\webapps\AeBRSserver In that file, the very last key at the bottom defines the URL where the Production Execution Manager web service can be reached. To verify it is correct, copy and paste the URL, up to the port number, into a web browser on the machine with the problem, and verify it connects to Apache. If there is any problem getting that page, check firewall settings.
Until the URL connects to Apache, the Administrator will not run. 3. ADSA Entry. For both the web-based features of Production Execution Manager and the Administrator tool, the Aspen Production Execution Manager Service must also be present and configured in ADSA. Keywords: start-up startup fail connect configuration archive archiving Error Description: Initializing the SoapClient object failed. Can't initialize web service for node: Check server/port/username/password/workstation configuration References: None
Problem Statement: Error in Aspen Oil Movement System (AtOMS) when the default Unit of Measure (UOM) is not defined: -464125941. Automation error Source=(modAtPM.GetBaseUOM). What does it mean and how can it be resolved?
Solution: AtOMS shows this error message when creating a new movement. The error originates from Aspen Enterprise Server (EMS), but it is not handled well in AtOMS. From EMS, this call generates an exception: AdvisorModelObj.GetBaseUOMFromUOMType(T1, UomType) This call is in AtOMSClient.exe in modAtPM.GetBaseUOM(), which propagates the exception so it is shown as in the screen capture above. The error is thrown by EMS because there is no base Unit of Measure (UOM) of the requested type defined in the Advisor database. The solution is to define a default UOM. If AtOMS is configured (Operating UOM) to use Volume, then define a default UOM for Volume and Flow Rate Liquid. If AtOMS is configured to use Mass, then define a default UOM for Mass and Flow Rate Mass. See the screen shot below from the AtOMS Configuration Tool. NOTE: Please see Solution # 120200 for steps to verify that your default UOM is defined correctly in Aspen Advisor. Keywords: None References: None
Problem Statement: Aspen AtOMS pops up the following error message when trying to create a new movement: This knowledge base article explains how to resolve this error message.
Solution: First make sure that Aspen Framework (AFW) security for Aspen AtOMS is configured properly. The AFW Application Name should be: Aspen Oil Movement Shipping. The Group should be: UserInterface. The Securable Object should be: Create. You can check security permissions in the AtOMSClient GUI, as shown below. If security is configured properly, then Security by Area must be verified. Use the AtOMSAdmin tool to verify Shipping Security and check the Config table as shown in the screen shot below. If Tank Security Active = 1, then security by area is enabled. The solution in this case is to disable security by area (set the value to 0, that is, a zero) or configure security by area as shown below. Open the AFW Security Manager and verify that the application named Aspen Oil Movement Shipping Tank Areas exists. If it does not exist, use the AFW Security Manager to import the attached XML application (Aspen Oil Movement Shipping Tank Areas). After this, add securable objects to Aspen Oil Movement Shipping Tank Areas, UserInterface, to match the area configuration from Aspen Advisor: the Name of the securable object must be the DBINDEX of the Area in Aspen Advisor, and the Description of the securable object should be the TAG of the Area in Aspen Advisor. Keywords: permission access deny denied You do not have permission to execute this action References: None
Problem Statement: Occasionally during setup, the error AEBRS Setup Error: Unable to find JDK 1.4.0_01 JavaHome might occur during one of the last installation steps.
Solution: This error occurs if, after installing the JDK environment, the installation process cannot successfully reference that environment. To resolve the issue, check the Environment Variables on the system and make sure that JAVA_HOME is defined. To do this, right-click on My Computer | Properties | Advanced | Environment Variables button. If it is not defined, give it the value of your JDK install location. For example: Name: JAVA_HOME Path: C:\j2sdk1.4.0_01 Next, check the Path variable. Does it have a reference to JAVA_HOME, or to a hard-coded path, or none at all? Move the JAVA_HOME reference to the beginning of your Path. If it exists as a hard-coded path, replace it with the following: %JAVA_HOME%\jre\bin;<all other paths follow> Make sure not to delete any of the other path references in this variable, or you might disable other programs or even destabilize your computer. In fact, just to be safe, you might want to highlight the entire Path value first, copy it, switch to Notepad and paste it there as a backup, in case it gets edited incorrectly. Lastly, using Add/Remove Software, uninstall Apache Tomcat and the J2SDK kit so they will get reinstalled. Reboot and reinstall AeBRS. If the problem persists, contact Customer Support. Keywords: References: None
Problem Statement: In versions earlier than 2006.5, the editor control for the Basic Phase Table component had a limit of 20 rows. Each row in the Editor defines the information for a column in the table the user will see. Is it possible to have more than 20 rows in the Editor?
Solution: Via CQ00204778, implemented in the latest Version 2006.5 Cumulative Patch, and therefore all later versions of Aspen Production Execution Manager, it is possible to increase the number of available rows in the Editor, and therefore the number of columns in the table presented to the user. This is done by adding the flag CHK_TABLE_MAX_ROWS to a config file, like flags.m2r_cfg, for example: Note that internally there has not been an actual limit of 20 columns for a screen table, but the limit to the available Editor rows in the Designer kept a programmer from defining more columns. Once the CHK_TABLE_MAX_ROWS key is defined and given a new default value, make sure to run codify_all.cmd to compile an updated set of configuration files. Close and reopen MOC to pick up the new environment values. After having set the new max to 25, new rows are available, and new column characteristics can now be defined: Keywords: table column rows limit expand increase maximum References: None
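For example, the flag entry might look like the fragment below (the value 25 matches the example in this article; the exact location within flags.m2r_cfg is a site choice):

```
# flags.m2r_cfg -- raise the Basic Phase Table Editor row limit
CHK_TABLE_MAX_ROWS = 25
```

Remember that, as noted above, this only takes effect after running codify_all.cmd and reopening MOC.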
Problem Statement: Occasionally when validating a screen, an error message with the following text is displayed: OutOfMemoryError or java.lang.reflect.invocation.target exception errors
Solution: By default, the Java environment uses 64 megabytes of your system's available memory. However, sometimes that is not enough. By design, AeBRS takes a screen capture of every BP screen at validation time; this is stored in the screenshot report to provide a detailed record of all actions that take place. For the brief time that a screenshot is in memory, it takes much of what is available. Here are two suggested troubleshooting steps if you are experiencing this on an eBRS system. FIRST SUGGESTION - increase the amount of available memory. When you open MOC, the AeBRS.cmd file runs. This is where we can override the 64 MB default and change it, for example, to 128 MB (obviously, you need to increase Java memory carefully, since it comes out of your machine's physical memory!). One of the last lines in the file typically looks like this: START= /i %J% -cp Edit it to add -Xmx128M, being careful to use a capital X: START= /i %J% -Xmx128M -cp This doubles the default 64 megabytes to 128 MB. After editing AeBRS.cmd, close and reopen MOC to put the change into effect. SECOND SUGGESTION - use the improved algorithm provided in Version 6.0.1 (running the latest cumulative patch) and later. Many actions happen in the background when you validate a screen, or even cancel it. Since the screen capture process happens at the same time, a reduced amount of memory is available for these actions during validation, and an out-of-memory condition can occur. An improved algorithm was provided in ER BA040910A and will be included in any later cumulative ER released for the 6.0.x product. The relevant CQ in the cumulative ER is CQ00174831, titled Screen shot generation loops on existing records to find next record id. Note that it is not enough to know the latest cumulative ER is applied to your system; specific action has to be taken. To apply the fix for CQ00174831, read the Release Notes and apply the tokens for your relational database as directed.
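The AeBRS.cmd edit described above, shown as a before/after fragment (the rest of the START line varies between installs, so it is elided here):

```
REM Before (default 64 MB Java heap):
START= /i %J% -cp ...

REM After (128 MB Java heap; note the capital X in -Xmx128M):
START= /i %J% -Xmx128M -cp ...
```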
IMPORTANT NOTE: If the Out of Memory problem is happening in an eBRS module (such as Config, Templates or Audit), note that these modules are launched in their own memory space, separate from the MOC module itself; review KB Solution 119564 for information about a similar configuration change to available memory. Keywords: java.lang.reflect.invocation.target OutOfMemoryError or java.lang.reflect.invocation.target exception References: None
Problem Statement: Error message Session Could not be identified when trying to access http://localhost/AeBRS for client.
Solution: When clients try to visit the Aspen Production Execution Manager (formerly Aspen eBRS) website, either the short or fully qualified nodename of that PC needs to exist in the Workstation table in the Config module on the Production Execution Manager server. Which version of the name (short or fully qualified) is needed depends on the network. Some administrators add entries for both, like in this example: MyComputer MyComputer.corp.aspentech.com Even if the computer has the thick client installed, and a WORKSTATION_NAME has been defined in config.m2r_cfg, this is still required, since visiting the website is a thin-client communication link that does not depend on any local files installed on the client PC. Keywords: Cannot access http://localhost/AeBRS page Access Denied References: None
Problem Statement: A powerful feature of Aspen eBRS is the ability to edit the Control Recipe. This can be useful when: - a Basic Phase is completed, but for some reason needs to be run again; - a Condition hangs, and the code in that condition needs to be edited to re-start PFC (Procedure Function Chart) evaluation and flow (for example, the Condition code relies on a tag value in Aspen InfoPlus.21 that was accidentally taken offline).
Solution: Production orders in Aspen eBRS are based on certified (i.e. read-only) objects. When a new order is created, a copy of the certified RPL is created for that particular order. This copy is then referred to as the Control Recipe. The Aspen eBRS Administrator, or another user with the proper security permissions configured, can then edit the Control Recipe. Valid actions are: - copy and paste a BP to a point in the Recipe flow past the current flow of execution - edit parameter values - edit any code in the Conditions of the RPL WARNING -- It is not legal to cut and paste objects, only copy and paste. Cutting and pasting objects in the Control Recipe may result in corruption, thus disabling evaluation and order flow. A future version of Aspen eBRS may be enhanced to disable the paste of a cut object. In other words: cut on its own (i.e. delete) is OK, and copy followed by paste is OK; only the combination of cut followed by paste is prohibited. Editing Control Recipes and Security Permissions By default, only the Administrator can edit the Control Recipe. However, UDOs can be defined in Local Security and then added to the RPL. This grants a non-Administrator permission to edit Planned or Active Orders (the two possible states of the Control Recipe). If the goal is to only perform Control Recipe edits before the order is activated, the custom permission (UDO) created is added to UDO (edit planned) in the RPL. To allow editing of in-progress orders, the permission is added to UDO (edit active). Using the Pizza Restaurant training example from the Aspen eBRS Foundation course, there are several cleaning-related tasks that recur automatically on a per-order, daily and weekly basis. By granting the Shift Manager permission to edit the Control Recipe of in-progress orders, he/she will be able to add an out-of-sequence cleaning task in response to business needs.
(This will result in a vast increase in Shift Manager power, since anyone who irritates him/her can immediately be assigned to mop the floor, etc., regardless of the predefined schedule.) In our example, the Shift Manager is John Smith, with a Windows login of John. John currently logs into Aspen eBRS as part of the Shift Manager role that currently has only the MOC user permission: Now let's extend John's privileges to allow Control Recipe modification for active orders: 1. Create UDO. Using Local Security, create a new UDO called ASSIGN_CLEANING: 2. Add UDO to Role. In this case John is already part of a defined role called Shift Manager. This role has already been given MOC User Permission. Now ASSIGN_CLEANING is also added. Permissions are added to Roles by going to the Permission, right-clicking it and choosing Properties, then picking the Role it should be granted to, and clicking Access: 3. Add UDO to RPL. To make this change to an existing production recipe it will be necessary to copy and paste the Certified RPL to a new version, add the modifications, then re-verify and re-certify it. Since no modifications to the Basic Phases are needed (the ability to edit the Control Recipe is a property of the RPL design) only the RPL will actually be changed. In this case, we add the ASSIGN_CLEANING UDO to the UDO (edit active) property of the new version of the RPL: Now a verification test order is created to test the modified RPL. Note that logging into MOC with John's account, the RPL Design button can be selected: But using another role, even though it has the Manage permission for Orders (i.e. all order-related permissions) the RPL Designer is not accessible, because that user's role does not have the ASSIGN_CLEANING UDO: As a final note regarding permissions, any user with Administrator privileges can edit Control Recipes regardless of the UDO structure. 
To verify the functionality of his new custom permission, John assigns an out-of-sequence DAILY_CLEANING_OPERATION to the current order. When John first goes to the Operation level, this is what he sees at the bottom of the operation design inside the Cleaning Unit Procedure: The Daily Cleaning Operation is contained inside a Serial construct which checks a daily expiration held in a user table. This ensures, once a Daily Cleaning Operation takes place, every order for the next 24 hours will skip the operation. John copies and pastes the Operation so it follows the STANDARD_CLEANING_OPERATION: John then exits the Designer, compiling the Control Recipe. After the CLEANING_UP runs during the production verification, John looks at the Control Recipe using Order Tracking and sees that the default DAILY_CLEANING_OPERATION was skipped as designed, but his custom-added Operation is now ready to execute after the STANDARD_CLEANING_OPERATION is finished: TROUBLESHOOTING Edits only have meaning (and will only compile!) if done at a point in the recipe not yet reached by the flow of execution. However the Design environment is not linked directly to the current order status. At the time of compilation a check is made. If an edit is done in a part of the Control Recipe that has already executed, you may get an error like this: Checking the Verification errors box from the Windows Start bar that appears at the same time, the system states: Keywords: Modified control recipe for this order is invalid due to compilation errors. Do you want to return to the design to correct the errors? If not, changes will be lost and previous control recipe will be retained. State of next element does not allow addition of any element: The design has not been verified. Please review global structure. References: None
Problem Statement: eBRS Version 2006 and later include a new Installer tool to finish eBRS configuration after running the Database Wizard. This tool takes care of not only initializing the eBRS Database connection, but also populates the Security database with the eBRS Roles and Permissions. When running the installer program several types of errors can occur, depending on your computer/network environment, and the information you supply to the installer. Review this article for advice on resolving any issues.
Solution: For the Aspen eBRSInstaller program, debug output is written to the Installer\debug directory, typically found here: C:\Program Files\AspenTech\AeBRS\Installer\debug For better security in Version 2006.5 and later, all debug files are written to All Users, so the location would be: C:\Documents and Settings\All Users\Application Data\AspenTech\AeBRS\Installer\debug As you run through this troubleshooting checklist, all debug files referred to are found in the above directory: 1. This error is displayed: Open the most recent Update Configuration debug file. In the debug file, search for the word exception. An entry like this: 10:10:25: m2rDatabaseConnection.URL : jdbc:sqlserver://DOBEASE3:1386;databaseName=AEBRS 10:10:25: Exception com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user 'AeBRS'. at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(Unknown Source) means the account name or password being specified is not valid. Make sure you are using the AeBRS account, and specifying the password that was set while running the Database Wizard. Re-run the installer and provide the correct AeBRS account name and a valid password. 2. This error is displayed: Open the most recent Update Configuration debug file. In the debug file, search for the word exception. An entry like this: 10:20:14: m2rDatabaseConnection.URL : jdbc:sqlserver://DOBEASE3SQLEXPRESS:1386;databaseName=AEBRS 10:20:15: Exception com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host has failed. java.net.UnknownHostException: at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(Unknown Source) means the AeBRSInstaller could not find the computer where SQL Server 2005 is running. Make sure the computer name is valid (try the ping utility from a Command prompt to make sure it is reachable.) If you are not in a domain you may need to add host entries to each PC to make sure they can resolve nodenames between them. 
Once the error is corrected, re-run the installer, specifying a valid PC name. If SQL Server 2005 is being used, use the SQL Server Configuration Manager to make sure the TCP/IP protocol is enabled, and that the port matches the port expected by eBRS (stop and start the SQL Server 2005 service before retrying the AeBRS Installer). Even if the TCP/IP protocol is enabled, it may be necessary to run the SQL Server 2005 Surface Area Configuration utility. In that tool, expand the Database Engine node, click Remote Connections, and verify that both Local and remote connections are enabled (the default is Local connections only). The default sub-choice of Using TCP/IP only is sufficient to resolve the above error and proceed with the AeBRS Installer.

3. This error is displayed: Database error: Cannot open database AEBRS2 requested by the login. The login failed.
Open the most recent Update Configuration debug file and search for the word "exception". An entry like this:
   10:34:44: m2rDatabaseConnection.URL : jdbc:sqlserver://DOBEASE3:1386;databaseName=AEBRS2
   10:34:44: Exception com.microsoft.sqlserver.jdbc.SQLServerException: Cannot open database AEBRS2 requested by the login. The login failed.
   at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(Unknown Source)
means the database name specified is not correct. Use SQL Server 2005 Management Studio to verify the name of the AeBRS database. Re-run the installer and provide the correct database name.

AeBRS Installer errors related to Apache Tomcat

4. This error is displayed: The Tomcat parameters are not correct
This message means the wrong node is specified. Unlike the other debug entries listed above (written to the Update Configuration debug file), errors related to Apache Tomcat are logged in the AeBRS Configuration debug file (in the same directory). Open the most recent copy of that file and search for the word "exception".
An entry like this:
   java.net.UnknownHostException: dobease3A
   at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:177)
   at java.net.Socket.connect(Socket.java:507)
   at java.net.Socket.connect(Socket.java:457)
   at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
lists the incorrect Apache Tomcat node name which caused the error. If you correct the Apache error, you can continue -- there is no need to restart the AeBRS Installer program.

5. This error is displayed: Tomcat parameters are not correct: Connection refused: connect
Open the most recent AeBRS Configuration debug file and search for the word "exception". An entry like this:
   java.net.ConnectException: Connection refused: connect
   at java.net.PlainSocketImpl.socketConnect(Native Method)
   at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
   at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
   at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
means either (1) the wrong port is specified, or (2) the Apache Tomcat service is not running on the specified node. Correct the error and you can continue -- there is no need to restart the AeBRS Installer program.

AeBRS Installer errors related to Microsoft SQL Server 2005

6. This error is displayed: Database error: The TCP/IP connection to the host has failed.
eBRS needs to connect to SQL Server 2005 using the TCP/IP port shown in the AeBRS Installer. The default port for SQL Server 2005 is 1433, and this is the port assumed by AeBRS. The above error means either that SQL Server is not running a service allowing a TCP/IP connection, or that the default port being used is not 1433. On the machine running SQL Server 2005, run NETSTAT -b from a Command prompt.
Scroll up and look for a port entry for sqlservr.exe. If you find a port, just change the AeBRS Installer screen to use the port specified, and you should be able to connect successfully (here the non-standard port 1386 observed from NETSTAT above is used in the installer). If you do not see a port listed for sqlservr.exe in the NETSTAT output, check to make sure TCP/IP ports are enabled, and verify the port:
A. Choose Start | Programs | Microsoft SQL Server 2005 | Configuration Tools | SQL Server Configuration Manager.
B. Expand SQL Server 2005 Network Configuration, Protocols for SQLEXPRESS, then right-click to see the settings on the context menu for TCP/IP connections -- make sure it is enabled.
C. Once again right-click on TCP/IP, choose Properties, select the IP Addresses tab, and verify what port SQL Server 2005 is using.
Now that TCP/IP is enabled, and the port being used is known, the AeBRS Installer application should connect successfully. If you still cannot connect, and you are running the AeBRS Installer from a computer other than the one where Microsoft SQL Server 2005 is installed, check the SQL Server 2005 Surface Area Configuration Manager (a separate utility accessible from the Start Menu), and make sure you are set to accept both Local and Remote connections.
Keywords: Database error: Login failed for user 'AeBRS'. Database error: The TCP/IP connection to the host has failed. java.net.UnknownHostException: Database error: Cannot open database AeBRS2 requested by the login. The login failed. Tomcat parameters are not correct: dobease3A Incorrect Tomcat connection information Tomcat parameters are not correct: Connection refused: connect Database error: The TCP/IP connection to the host has failed. java.net.ConnectException: Connection refused: connect References: None
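Before re-running the installer, it can save time to verify from the client machine that the SQL Server TCP port is actually reachable. A minimal Python sketch of that check (the host name and port below are placeholders -- substitute your own server and the port observed via NETSTAT):

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the default SQL Server port on a hypothetical server name.
# port_is_open("DOBEASE3", 1433)
```

If this returns False for the host and port you gave the installer, the problem is connectivity or configuration (firewall, disabled TCP/IP protocol, wrong port), not the installer itself.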
Problem Statement: When the user updates strapping tables in Advisor with new calibrations, older data is being overwritten by the new data in the strapping tables, and calculations based on the old strapping tables can no longer be repeated.
Solution: Users should not delete old strapping tables or overwrite them with newer data, as calculations using the older data and tables may need to be re-run. Instead of overwriting the existing strapping tables with new calibrations, the user should simply add the new strapping tables into Advisor (with different names), leaving the older ones intact. Then, in the detail page of the tank gauge instrument, simply point to the new strapping tables on the date they are to be used. With Advisor's date management of model changes, the tank gauge instrument will know to use the old strapping table prior to the change date and the new strapping table from then onwards. Keywords: strapping tables strapping table recalibration References: None
Problem Statement: Which database table in the Aspen Operations Reconciliation and Accounting database contains the product states, types and classes?
Solution: GCOFEDST is the table in the Aspen Operations Reconciliation and Accounting database that contains the configuration information for all products (feedstock) in the Aspen Operations Reconciliation and Accounting model. Among this information are the STATE, TYPE and CLASS of the product; the database fields for this information in the GCOFEDST table are FLAG_STATE, FLAG_TYPE and FLAG_CLASS respectively. The data type of these three fields is IIFLAG, which is an integer value. The translation of the picklist items to their integer equivalents is in the program code and is not available in a database table. Below is the translation of each integer value to the equivalent item in the drop-down selection lists on the Details tab of the Product Configuration dialog box in the Aspen Operations Reconciliation and Accounting GUI.
FLAG_STATE: 0 = Liquid, 1 = Gas, 2 = Solid
FLAG_TYPE: 0 = Crude, 1 = Refined, 2 = Chemical, 3 = Other, 4 = Lube, 5 = Chemical II, 6 = ELV, 7 = Table 6C
FLAG_CLASS: 0 = Charge, 1 = Finished, 2 = Intermediate, 3 = Other, 1000 = Comp Charge, 1001 = Comp Finished, 1002 = Comp Intermediate, 1003 = Comp Other
Keywords: GCOFEDST Product States References: None
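For reporting against GCOFEDST directly, the integer-to-text translations above can be captured in a small lookup helper. This is an illustrative Python sketch (the function name is ours, not part of the product; the mappings mirror the table in this article):

```python
# Translations of the GCOFEDST integer flags, per the article above.
FLAG_STATE = {0: "Liquid", 1: "Gas", 2: "Solid"}
FLAG_TYPE = {0: "Crude", 1: "Refined", 2: "Chemical", 3: "Other",
             4: "Lube", 5: "Chemical II", 6: "ELV", 7: "Table 6C"}
FLAG_CLASS = {0: "Charge", 1: "Finished", 2: "Intermediate", 3: "Other",
              1000: "Comp Charge", 1001: "Comp Finished",
              1002: "Comp Intermediate", 1003: "Comp Other"}

def describe_product(flag_state, flag_type, flag_class):
    """Translate the integer flags from a GCOFEDST row into display text."""
    return (FLAG_STATE.get(flag_state, "Unknown"),
            FLAG_TYPE.get(flag_type, "Unknown"),
            FLAG_CLASS.get(flag_class, "Unknown"))
```

Such a helper is handy when building custom reports, since the translations exist only in program code, not in a database table.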
Problem Statement: What relational databases does V7.2 of Aspen Operations Reconciliation and Accounting (AORA, formerly called Aspen Advisor) support?
Solution: AORA is an ODBC-compliant database application, which means it will support any relational database that is compliant with the Open Database Connectivity (ODBC) standard. The newest release, V7.2, has been specifically tested with and supports the following database versions (with the appropriate ODBC drivers installed):
Microsoft Access: 2007 SP2 and 2003 SP3
Note: Due to limitations in Microsoft Access itself, this database is of limited use with AORA. It is mainly used for initial prototyping of AORA models and exporting of models for support purposes. Microsoft Access does not support the structured query language (SQL) outer join function, so most standard AORA reports will not run properly when run from Aspen Reporter (see Solution 102714).
Microsoft SQL Server: 2005 SP3 (both Standard and Enterprise Edition), 2008 SP1 (Enterprise Edition only)
Microsoft SQL Express: 2005 SP3 and 2008 SP1
Oracle: 9i and 10g databases (both Release 2)
For more information on the supported platforms and operating systems for V7.2, please refer to the V7.2 Installation Guide.
Note: This Solution applies only to the latest release, V7.2. Relational database support usually varies slightly with each version. For validated information regarding the databases supported by AORA, for versions prior to V7.2 or future releases, please refer to the System Requirements section in the Overview chapter of the AORA Installation Guide.
AORA Installation Guide for V7.1
AORA Installation Guide for 2006.5
Keywords: Advisor database relational database ODBC database versions References: None
Problem Statement: Can Advisor perform flow compensation on raw flowmeter readings? Where is the best place to perform flowmeter compensation?
Solution: Flow compensation can usually be performed at many levels in the plant information system. Compensation for actual flowing conditions (temperature, pressure, density) can be done at the meter level, in the distributed control system (DCS), in the plant historian, or in Advisor itself. Flow compensation must be performed in order to accurately reconcile the plant balances. Advisor model developers should have the flow compensation performed at the lowest level possible, as data accuracy and granularity decrease as the levels increase; the lowest levels are the meter and the DCS. At low levels (meter/DCS), compensation can be done instantaneously if the actual flowing conditions of temperature, pressure and density are available along with the raw flow reading. The compensated flow calculated from instantaneous values should be quite accurate. At higher levels, such as in Advisor itself, only the daily or hourly averages of temperature, pressure, density and flowrate are usually available, so calculations based on these average values will be inherently less accurate. Keywords: Advisor model flowmeters flow compensation References: None
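The accuracy point above can be demonstrated numerically: totalizing mass from instantaneous readings gives a different answer than compensating with period averages, because the product of averages is not the average of products. A small illustrative Python sketch (all flow and density values are invented for the example):

```python
# Illustrative hourly readings: volumetric flow (m3/h) and density (t/m3)
# at actual flowing conditions, over four hours.
vol = [100.0, 120.0, 80.0, 150.0]
rho = [0.82, 0.85, 0.80, 0.87]

# Low-level (meter/DCS) compensation: convert each instantaneous reading,
# then total the mass.
mass_instantaneous = sum(v * r for v, r in zip(vol, rho))

# High-level compensation (all an averages-only model can do): multiply
# the average flow by the average density, scaled to the same period.
mass_from_averages = (sum(vol) / len(vol)) * (sum(rho) / len(rho)) * len(vol)

# The two totals differ whenever flow and density vary together.
print(mass_instantaneous, mass_from_averages)
```

Here the instantaneous total is 378.5 t versus 375.75 t from averages -- a small but systematic discrepancy that grows with the variability of the process.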
Problem Statement: Is there a batch command that selects a particular zone for processing?
Solution: The SETMODELZONE command will select a previously defined Group Zone within Advisor. The selected zone will then be used for any successive batch commands.
Syntax: SETMODELZONE zonename;
where zonename is the name (tag) of the zone to be used in processing.
NOTES:
- The SETMODELZONE command must precede the OPENMODEL command.
- The zonename must exactly match the zone name (tag) defined in Advisor. This parameter is case-sensitive.
Keywords: zone batch command References: None
Problem Statement: The following reconciliation statuses are available under Expert System Preferences: 'Not Saved', 'Preliminary', 'Intermediate', 'Complete', 'Finalized' and 'Locked'. This Solution explains the purpose of each status.
Solution:
'Not Saved' appears by default before the save command is run during the Initialize, Reconcile, and Save (IRS) sequence.
'Preliminary' status is shown after expert results are saved.
'Intermediate', 'Complete' and 'Finalized' are not set by the system but can be set by a custom program for user-specific purposes.
'Locked' can be set to prevent other users from modifying data in the Aspen Advisor model. Please note that only users with either Superuser or Administrator access rights have the privilege to unlock the model.
Keywords: Advisor Reconciliation Status References: None
Problem Statement: When should a default product be assigned to a configured pipe in an Advisor model and when is it used?
Solution: The default product for a pipe should be assigned in the Details tab of the pipe object. The default pipe product is used when creating new readings for the pipe. This is the default for that reading when it is created. It is also used to determine the product default properties of a pipe, if it gets to that level. Once the reading is created there is no link back to the pipe product. If you change the pipe product the reading product does not change. If you change the reading product the pipe product does not change. Keywords: Advisor model configuration pipe object default product References: None
Problem Statement: An installation of Aspen Operations Reconciliation and Accounting results in a file called IIEQUA32.dll being placed in:
   C:\Program Files\Common Files\AspenTech Shared
Another three files, called IIEQUA32_US, IIEQUA32_M15 and IIEQUA32_M20, are also installed in:
   ...\AspenTech\Advisor\Measurements
The three DLLs in the Measurements directory are for different measurement systems:
iiequa32_us - US measurement system; 60F is the base temperature.
iiequa32_m15 - metric measurement system; 15C is the base temperature.
iiequa32_m20 - metric measurement system; 20C is the base temperature.
By default, the IIEQUA32.dll in the AspenTech Shared directory is identical to the US DLL (IIEQUA32_US.dll) in the Measurements directory. What does a user need to do to use one of the other measurement DLLs mentioned above?
Solution:
1. Via Windows Explorer, rename the file IIEQUA32.dll in C:\Program Files\Common Files\AspenTech Shared to something like iiequa32_orig.dll.
2. Copy the required measurements DLL from ...\AspenTech\Advisor\Measurements to C:\Program Files\Common Files\AspenTech Shared.
3. Rename the copied file to IIEQUA32.dll.
Keywords: References: None
Problem Statement: How to set up an ODBC connection to the Aspen Operations Reconciliation and Accounting Model (AORA)?
Solution: Following are the steps to set up an ODBC connection:
1. Download the model to your local computer. Go to Data Sources (ODBC) through Start --> Programs --> Control Panel --> Administrative Tools. Using the ODBC Data Source Administrator, create a new DSN (System DSN or User DSN, based on your profile on the computer).
Note: On a machine with a 64-bit OS, use the 32-bit ODBC driver executable 'odbcad32.exe' located in the folder C:\Windows\SysWOW64 to create the ODBC data source.
2. You can also choose Microsoft SQL or Oracle drivers based on the type of relational database being used at your site. The screen shot below shows an example of creating an ODBC data source for a Microsoft Access database.
3. Configure the DSN to point to the MS Access database copied onto the computer, and complete the configuration.
4. Next, launch AORA, go to File, and click Open Model.
5. Open the model by selecting the name of the DSN defined in step 1. The default username is 'superuser' with a corresponding password of 'superuser' for the demo model created using the method described in Solution 136637. When using a different model, log in with the correct username and password. You can now see the demo model in AORA.
Keywords: Data Source ODBC Demo Model odbcad32.exe References: None
Problem Statement: After the initial install, when opening MOC, the error message BPC User Not Correct displays. After dismissing the message, the application continues to open successfully.
Solution: This happens because the eBRS application expects to find a BPC program on the network to connect to. If you have not installed BPC, it is necessary to open:
   C:\Program Files\AspenTech\AeBRS\cfg_source\flags.m2r_cfg
Search for CDM, and change CDM_RESOURCE_ENABLE=1 to 0. Close and save the file, then run CODIFY_ALL.CMD from the same directory.
Keywords: References: None
Problem Statement: On a correctly configured system, an XML query or other API-based routine that was working OK starts failing. The only error indicated in the log file is a NullPointerException error.
Solution: When this problem occurs (the underlying cause is the server not having enough memory), the NullPointerException error message appears in the client debug files (MOC\debug\xxxxx.dbg). This happens because the server does not have enough memory to hold the data it is processing (for example, the results of an XML request), so the client receives a null value for that XML result. When this happens, go to the server machine's <aebrs>\APIServer\debug\<last>.dbg file and you should find an OutOfMemory exception.
To resolve this, it is necessary to increase the available memory. Both client- and server-side processing happen inside a Java Virtual Machine (JVM). The JVM allocates the memory it needs dynamically, up to a pre-configured limit. By default, that limit is set to 64 MB. However, if you have a system with a large amount of RAM (512 MB or greater), you may be able to avoid errors on your server and clients by increasing that allocation ceiling. Then, during occasional spikes of activity, you lessen the risk of a process crashing or timing out because it does not have enough memory.
For client-side memory configuration of the JVM, please see KB 114292. For out-of-memory issues with independently executing eBRS modules (there are three: Config, Template and Audit), see KB 119564. This article explains how to change the memory allocation on the server.
Before making any of these changes, make sure to exit eBRS, and stop the eBRS server by stopping the Apache Tomcat service. Modify the registry as follows:
1. Run Regedit, and go to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Apache Tomcat\Parameters registry folder.
2. Increase the JVM Option Count value by the number of parameters you want to add. Scan the list first, to make sure that none of these parameters already exist. If they don't, increase the JVM Option Count value by three (3).
3. Add the 3 new keys (or modify existing values, if they already exist):
   JVM Option Number x+1 = -Xms32m
   JVM Option Number x+2 = -Xmx512m
   JVM Option Number x+3 = -XX:NewSize=5m
Lines 1 and 3 should remain the same -- the second line (512m in the example above) is the one used to increase the amount of available memory for processing. Keep in mind that you should never increase this number beyond your available memory less 256 MB for system memory (ideally 512 MB). So on a system with 512 MB of memory, you might try 128 MB first (thereby doubling your JVM heap from its default of 64 MB). You can now restart Apache Tomcat, and reopen MOC to verify whether the fix is effective.
Keywords: Freeze Hang Timeout time out References: None
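The sizing rule of thumb above -- never set the heap ceiling beyond available memory minus 256 MB (ideally 512 MB) reserved for the system -- can be sketched as a small helper. This is our illustration only; the function names are not part of the product:

```python
def max_jvm_heap_mb(total_ram_mb, system_reserve_mb=512):
    """Upper bound for the -Xmx value, per the rule of thumb above:
    leave at least system_reserve_mb (ideally 512 MB) for the OS."""
    return max(0, total_ram_mb - system_reserve_mb)

def xmx_option(heap_mb):
    """Render a heap size as the corresponding JVM option string."""
    return "-Xmx%dm" % heap_mb

# On a 512 MB machine with the minimum 256 MB reserve, the ceiling is
# 256 MB, so the article's suggested first try of 128 MB is safely inside it.
```

For example, xmx_option(128) yields "-Xmx128m", the string that would go into the JVM Option Number registry value.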
Problem Statement: Depending on the environment, a policy may exist to avoid writing any data files to the C drive, reserving it only to hold actual program files. In that case, how can writing debug files to the default location: C:\Documents and Settings\All Users\Application Data\AspenTech\AeBRS be avoided?
Solution: This default path is defined by the INSTALL_PATH key, at the top of the config.m2r_cfg file. Edit the path to direct all APEM debug to the desired drive and location. After editing the path, save the changes, and run codify_all.cmd. It will be necessary to restart any MOC sessions and/or the Apache Web Server to pick up the new path information.
It is only necessary to create the base folder structure defined by INSTALL_PATH. For example, to send files to D:\Logs\APEM, define in config.m2r_cfg:
   INSTALL_PATH = D:\\Logs\\APEM
The system will then automatically create all necessary subfolders in the APEM folder. However, it is critical to create the base Logs\APEM folder as a starting point, or MOC will fail to start up.
Another important point -- logging directories should only be configured on Local Disk, meaning any hard drive physically attached to the server. Gathering meaningful debug when needed requires that the folders where debug is written be reliably accessible. Also, even though MOC and other APEM modules could theoretically have their debug redirected to mapped drives, the Apache Web Server debug (APIServer folder) cannot be easily redirected, since it runs under Local System and thus fails to access mapped drives.
As a last note, despite the key name INSTALL_PATH, there is no impact on APEM installation operations, which are handled via MSI packages. This key is only used to determine the debug output location.
Keywords: registry edit profile References: None
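Since MOC fails to start if the base folder named by INSTALL_PATH does not exist, it is worth creating (and verifying) it before editing the config. A minimal Python sketch of that pre-check (the function name is ours; pass whatever base path your INSTALL_PATH will use):

```python
import os

def ensure_debug_base(path):
    """Create the base folder that INSTALL_PATH will point at, if missing.
    APEM creates its own subfolders automatically, but will not start
    if the base folder itself (e.g. D:\\Logs\\APEM) does not exist."""
    os.makedirs(path, exist_ok=True)
    return os.path.isdir(path)
```

Run this (or simply create the folder by hand) before running codify_all.cmd and restarting MOC.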
Problem Statement: Starting from V7.2, Aspen Production Execution Manager (APEM) includes a new debugger for phase and script execution in the Design Editor.
Solution: APEM V7.2 now includes debugging capability for phase and script execution from inside the Design Editor. The debugger includes the following capabilities:
- Breakpoint definition and single-step execution.
- Variable value watch and triggering an execution stop on value change.
The debugger environment consists of a window with three major panes:
1. Current Executing Code pane: track the execution and manage the paused execution.
2. Breakpoint List pane: define and manage breakpoints.
3. Watched Variables pane: manage watched variables.
Keywords: V7.2 Overview Debugger Break Point Watched Variables Design Editor Execution APEM References: None
Problem Statement: Timestamps in Aspen Production Execution Manager are always held internally in UTC format, and then converted to the local timezone when displayed. However if times from an external source (for example an XML file) are already in UTC format, how can you make sure Production Execution Manager treats them that way, and does not apply a conversion?
Solution: The key is the tz parameter in the STR2DATE function, which allows you to designate the time zone that should apply to a particular value. Here are the function parameters as shown in the Design Guide:
   DATE STR2DATE (STRING strdate, STRING format, STRING tz)
For this example, instead of specifying a particular time zone, we specify that MyExternalTime is already UTC:
   MyTimeIsUTC := STR2DATE(MyExternalTime, dd MMM yyyy HH:mm:ss, UTC)
The list of valid values for the tz parameter is not determined by Production Execution Manager, but is instead inherited from the Java environment. A quick Google search can lead to multiple sources of valid designations for this parameter.
Back to our example: if MyTimeIsUTC is stored to the Production Record Manager database, no conversion will be applied, since we have already told the system it is in UTC time. And if MyTimeIsUTC is shown somewhere (for example, displayed in an Expression on a Basic Phase screen), the local time zone offset is automatically applied for display purposes, though the value itself always remains UTC internally. To capture the value in the local time zone for some other purpose, DATE2STR can then be used:
   s_Local_Time := DATE2STR(MyTimeIsUTC, dd MMM yyyy HH:mm:ss)
since DATE2STR applies the local time zone settings when creating a string from the timestamp value.
Keywords: convert conversion transform timezone References: None
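The parse-as-UTC, display-as-local pattern described above is not unique to STR2DATE/DATE2STR; as an analogy (Python standard library, not APEM code), the same idea looks like this:

```python
from datetime import datetime, timezone

# Parse a string that is already in UTC, tagging it as UTC so that no
# local-time offset is applied -- the analogue of STR2DATE(..., UTC).
s = "05 Mar 2021 14:30:00"
utc_time = datetime.strptime(s, "%d %b %Y %H:%M:%S").replace(tzinfo=timezone.utc)

# Converting for display applies the local offset -- the analogue of
# DATE2STR using the local time zone settings.
local_time = utc_time.astimezone()

# The underlying instant is unchanged; only the representation differs.
print(utc_time.isoformat(), local_time.isoformat())
```

As in APEM, the stored value stays in UTC; only the rendered string reflects the local zone.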
Problem Statement: When upgrading an Aspen Production Execution Manager from a previous version to V7.1, any customization in the external_db.m2r_cfg file will be lost. This is also the type of error that may not be apparent immediately after upgrade. The failure to contact the external database will likely only be seen when a Basic Phase runs which tries to connect to the externally defined database. A common error message for that failure to make the connection is: java.lang.reflect.invocationtargetexception Also, whenever an Active Document runs, it does an automatic check of any database connections, and if it tries to then connect, a failure may occur then.
Solution: If a backup of the file itself (or the cfg_source directory) was not made before the upgrade, it will be necessary to recover a copy of the original file from backup media. Keep in mind this installation error is only a problem if in fact this file has been modified to define connection information to an external database, like InfoPlus.21. (This file has nothing to do with the connection information to the external database that holds all Production Execution Manager data.) This problem is resolved in the V7.2 installation Keywords: java.lang.reflect.invocationtargetexception 11:53:04: m2rDatabaseConnection.URL : jdbc:microsoft:sqlserver://vm1test2:1433;DatabaseName=;SelectMethod=cursor 11:53:04: Exception java.lang.ClassNotFoundException: com.microsoft.jdbc.sqlserver.SQLServerDriver at java.net.URLClassLoader$1.run(URLClassLoader.java:200) References: None
Problem Statement: Overview of new features that were incorporated in V7.2 Aspen Production Execution Manager ( formerly called AeBRS)
Solution: The V7.2 version of Aspen Production Execution Manager contains several major new or re-designed features, and many other improvements that bolster robustness, usability, and performance. See the Release Notes for details.
New Script Debugger: Aspen Production Execution Manager V7.2 now includes debugging capability for phase and script execution from inside the Design Editor. See Solution 129614, What's New in Aspen Production Execution Manager V7.2 - New Script Debugger, for more information.
Usability Improvements:
- Re-usable screen components.
- Upgrade version wizard that automatically creates new versions of BPLs and RPLs when source components have changed.
- Full BPLs can be exported and imported from the Design Editor, to facilitate the use of third-party code management tools.
New Thin Client interface for all user interaction: With a new look and feel, it implements the following functionality: Order Management, Order Tracking, and Order Execution. It includes a configurable graphical overview screen and web client configuration.
Hierarchical Parameter Editor: The Aspen Production Execution Manager V7.2 Design Editor now includes a new hierarchical parameter editor that displays PFC parameters in a logical tree structure. See Solution 129615, What's New in Aspen Production Execution Manager V7.2 - New Hierarchical Parameter Editor, for more information.
Keywords: V7.2 Overview Debugger Design Editor Parameter Editor Re-usable Upgrade Export Thin Client Web Client Web Interface References: None
Problem Statement: This knowledge base article provides a list of error codes returned by the Aspen AtOMS AdvConnect interface.
Solution:
Error   Description
0       SUCCESS
-199    INISETC returned 0
-198    DaInitialize returned 0
-197    DaAddServer: Error Number 0
-196    DaAddServer returned 0 (no server)
-195    Cannot get server information
-194    Server is not connected
-193    Text field not in server database
-192    Value field not in server database
-191    Average field not in server database
-190    TextTime field name is blank
-189    TextValue field name is blank
-188    ValueTime field name is blank
-187    ValueValue field name is blank
-186    AverageTime field name is blank
-185    AverageValue field name is blank
-184    FINDHISX: Error Number 0
-183    FINDHISX: No history point
-182    RHISASCII: Error Number 0
-181    RHISASCII: No history point
-180    RHISDATAX: Error Number 0
-179    RHISDATAX: No history point
-178    Bad quality
-177    Suspect quality -- rejected
-176    Value = +++++, ----- or ?????
-99     Exception
-98     Record name null pointer
-97     Return value null pointer
-96     Record not found
-95     Time stamp conversion
-94     No good points to average
Keywords: References: None
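When post-processing AdvConnect return codes in a script or report, a lookup helper keeps the diagnostics readable. A small Python sketch (an excerpt of the table above -- extend the dictionary with the remaining codes as needed; the helper itself is ours, not part of the interface):

```python
# Excerpt of the AdvConnect return codes listed in the table above.
ADVCONNECT_ERRORS = {
    0: "SUCCESS",
    -199: "INISETC returned 0",
    -196: "DaAddServer returned 0 (no server)",
    -194: "Server is not connected",
    -178: "Bad quality",
    -96: "Record not found",
    -94: "No good points to average",
}

def describe_error(code):
    """Return the description for an AdvConnect return code."""
    return ADVCONNECT_ERRORS.get(code, "Unknown error code %d" % code)
```

For example, describe_error(-94) returns "No good points to average".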
Problem Statement: Clicking on the PFC chart symbol on the toolbar when using Open Order Tracking for an active order produces the message: The RPL has no design
Solution: An easy workaround is to use the Workstation view to execute the Basic Phase. To correct the problem permanently, follow the instructions below.
The 'The RPL has no design' message means the .CHO object is missing. To fix the problem, select the order in the Order module and click the Designer button. This takes you into the .CHK file. At some point in the recipe past the current flow of execution, make a minor change such as inserting a comment, which requires text. Go to Source Code and enter something like:
   // My Comment
Then go back to the top level and compile. When you exit the Designer, it will do a validity check. If the Designer is happy with the .CHK changes, it will then generate new .CHX and .CHO copies of the RPL to reflect the updated .CHK. The three components together comprise 'the control recipe', the copy of the recipe made and used for execution of each order. The .CHX copy is used for actual execution of the recipe. The .CHO copy is used to present the graphical view in Order Tracking, and .CHK is the editable source code from which those two objects are derived.
This problem is part of a component that has been entirely rewritten in V7.1, so it will not exist in later versions. If you would like additional information on editing control recipes, here is a link to another Knowledge Base article on the subject: 125737: How to edit the Aspen eBRS Control Recipe
Keywords: editing control recipes .CHO, .CHX and .CHK files References: None
Problem Statement: During Basic Phase Execution, an E4125 Reload error is displayed on the user's screen. In the MOC debug file lines like this appear: E4125(Reload) at m2r.DataModel.m2rDataModel.checkDBUpdateRowCount(m2rDataModel.java:2504) at m2r.DataModel.m2rDataModel.getExternalRow(m2rDataModel.java:2462) at m2r.DataModel.m2rDataModel.post(m2rDataModel.java:737) Looking before this failure, it is apparent that numbers with high precision are being inserted into the RDB: 11:29:14: GUI(IQS):getValue(VR_MYVAR1)=5.17044830334934E-4 11:29:14: GUI(IQS):getReadVar(VR_MYVAR2) 11:29:14: GUI(IQS):getValue(VR_MYVAR3)=-0.002803320052764513 11:29:14: GUI(IQS):getReadVar(VR_MYVAR4) 11:29:14: GUI(IQS):getValue(VR_MYVAR5)=5.140179779242563E-4 11:29:14: GUI(IQS):getReadVar(VR_MYVAR6) 11:29:14: GUI(IQS):getValue(VR_MYVAR7)=0.0010167709885194702
Solution: This error typically occurs when attempting to insert high-precision numbers into the RDB. APEM creates a checksum to validate data that is committed to the RDB. Once committed, the data is immediately read back from the RDB, and the checksum held in memory is compared to a new checksum generated from the data that was just read back. If this reload of data does not match what was originally committed, the E4125 error is generated, warning the user that there is a problem with the integrity of the data.
In this case, the root cause of the problem is that the RDB makes its own decision on how to handle high-precision numbers, applying rounding or otherwise dropping trailing digits it considers non-significant. So one resolution is to investigate the RDB settings and see whether it is possible to have it store all numbers precisely as inserted, without applying any change to those numbers.
On the APEM side, this problem can be resolved by setting a maximum number of significant digits, based on what the RDB can handle. Do this via the following steps (for SQL Server databases, edit db_mcsql.m2r_cfg; for Oracle, db_mssql.m2r_cfg -- though it should be noted this problem typically occurs on SQL Server):
1. Edit db_mcsql.m2r_cfg, found in the cfg_source directory.
2. Add the following two flags to the file:
   DB_SIGNIFICANT_MARGIN = 0
   DB_MAX_SIGNIFICANT = 12
3. Save the changes, then run codify_all.cmd.
4. Reopen MOC to apply the new settings to that session, and try to reproduce the problem.
Notice that 12 picks a specific precision, so that a number like 0.4143429434928304E+04 would instead be committed to the RDB as 0.414342943492. You can try tuning DB_MAX_SIGNIFICANT, making it smaller (if numbers with 12 places of precision are meaningless) or larger (but no larger than your RDB is able to commit and then validate without an E4125 Reload error).
Keywords: References: None
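The effect of DB_MAX_SIGNIFICANT = 12 -- limiting values to 12 significant digits before they are committed -- can be approximated with a short Python sketch. This is our illustration, not the product's actual code; note that it rounds, whereas the product may truncate the trailing digits instead:

```python
def limit_significant(value, digits=12):
    """Round a float to `digits` significant figures -- an approximation
    of what DB_MAX_SIGNIFICANT does before a value reaches the RDB.
    (The product may truncate rather than round; this sketch rounds.)"""
    if value == 0:
        return 0.0
    return float(f"{value:.{digits}g}")

# A high-precision value like those in the debug excerpt above loses its
# trailing digits once limited to 12 significant figures.
print(limit_significant(5.17044830334934e-4))
```

Either way, the point is the same: after the limit is applied, the value written and the value read back agree, so the checksum comparison no longer fails.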
Problem Statement: How can one get a data record (tag) from the Aspen InfoPlus.21 database into Aspen AtOMS?
Solution: Open AtOMSAdmin and go to Node Instrument. INSTRUMENT NAME is really the INSTRUMENT_TAG field of the database. Even though it appears as a list, you can type anything into that field. The list is there to show all instruments from Advisor, but only two things matter:
1. For LEVEL gauges, specify the Advisor instrument. It MUST be the same tag as in InfoPlus.21 (or any other supported real-time database).
2. For everything else, type in the InfoPlus.21 TAG NAME. Sometimes that same instrument may also exist in Advisor; most of the time it will not.
For CALCULATED instruments (and AtOMS knows which they are: CALC_GROSS_VOLUME, CALC_NET_VOLUME, CALC_MASS), if the instrument tag is valid AtOMS will write it to InfoPlus.21. For all other instruments, AtOMS will read from Aspen InfoPlus.21. Keywords: None References: None
Problem Statement: Sometimes, when activating a movement for a new tank, the movement will remain in the Starting state instead of the Active state, and the following error is received in the AtOMS Monitor log file:
2007-01-31 16:46:42 2320 0x1 AtOMSMonitor.UpdateMovementEventsInfo1 Error 0x80004005 performing some Calculations for Movement 91910
2007-01-31 16:46:42 2320 0x1 AtOMSMonitor.UpdateMovementEventsInfo1 Unspecified error
2007-01-31 16:46:42 2320 0x1 AtOMSMonitor.UpdateStartingMovements Error 0x80004005 activating Movement 91910. Movement will not be activated
2007-01-31 16:46:42 2320 0x4 AtOMSMonitor.wt_StartMovement Ending StartMovement Thread (2320) with ErrCode 0x80004005
2007-01-31 16:48:33 756 0x8 EMSWrapper.LogAdvCalc GetTankVolumeEx Parameters: TankID = 0 ProductID = 10014 InstrID = 0 Level = 0.000000 WL = 0.000000 Density = 0.825800, UOMID = 1001 Temp = 29.400000, UOMID = 1302 %H2O = 0.000000, UOMID = 0
How can the above error, 0x80004005, be overcome?
Solution: If you look closely at the monitor log, the error code is 0x80004005. A little further down you can see the GetTankVolumeEx parameters:
TankID = 0
InstrID = 0
This entry is INVALID and will cause EMS to return that error. It is IMPERATIVE that the AtOMS configuration contain a valid TankID or InstrumentID, which are nothing more than the DBINDEX of the tank or the level instrument (in that tank) in the Advisor database. The AtOMSAdmin GUI should be used (see the steps below) to correct this configuration, and should pick up the correct Instrument ID from the Advisor database.
1. Open the AtOMSAdmin GUI
2. Go to the AtOMS configuration summary
3. Click on node instruments in the tree
4. Look for the node (it should be a tank) that is used in the movement. It can be the source node or the destination node of the movement
5. Locate the level instrument and open it for edit (double-click)
6. On the General tab select the INSTRUMENT_NAME from the list, using the appropriate LEVEL instrument from Advisor
7. Click on the Detailed tab; a level instrument should NEVER have INSTRUMENT ID = 0
Any movements that are already in the AtOMS database with this invalid configuration will have to be deleted manually. You will need the DBINDEX of the movement (AtOMSClient can show it) and then delete the appropriate records from the ATOMS_MOVEMENT and ATOMS_MOVEMENT_EVENT tables. Another option is to delete the tank and recreate it in the AtOMS Administrator GUI with the valid TankID and InstrID. Keywords: None References: None
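The manual cleanup of the ATOMS_MOVEMENT and ATOMS_MOVEMENT_EVENT tables can be sketched as below. The minimal schema here is illustrative only (the real tables have many more columns); the table and column names follow the article, and an in-memory SQLite database stands in for the real RDB. Always back up the real database before deleting records.

```python
# Hedged sketch: purge an invalid movement by DBINDEX, events first, then
# the movement row itself. Schema is a hypothetical subset for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ATOMS_MOVEMENT (DBINDEX INTEGER PRIMARY KEY, STATE TEXT);
    CREATE TABLE ATOMS_MOVEMENT_EVENT (MOVEMENT_ID INTEGER, EVENT TEXT);
    INSERT INTO ATOMS_MOVEMENT VALUES (91910, 'STARTING'), (91911, 'ACTIVE');
    INSERT INTO ATOMS_MOVEMENT_EVENT VALUES (91910, 'START'), (91911, 'START');
""")

movement_id = 91910  # the DBINDEX shown in AtOMSClient for the stuck movement
conn.execute("DELETE FROM ATOMS_MOVEMENT_EVENT WHERE MOVEMENT_ID = ?", (movement_id,))
conn.execute("DELETE FROM ATOMS_MOVEMENT WHERE DBINDEX = ?", (movement_id,))
conn.commit()

remaining = [r[0] for r in conn.execute("SELECT DBINDEX FROM ATOMS_MOVEMENT")]
print(remaining)  # [91911] -- the valid movement is untouched
```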
Problem Statement: This knowledge base article explains why the AspenTech Oil Movement System (AtOMS) interface to Aspen Advisor may crash when processing data for a particular day.
Solution: The AtOMS interface to Aspen Advisor (atoms2advisor.exe) may terminate unexpectedly if it encounters data that it does not expect. Examples of data that may cause atoms2advisor.exe to terminate unexpectedly are:
1. Extremely large data values (e.g., 1000000000000)
2. Negative values for cases in which negative values are unrealistic
3. Duplicate cutoff events
Keywords: None References: None
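The three failure cases above can be screened for before handing data to the interface. The following sketch is a hypothetical pre-validation pass (field names, threshold, and record shape are illustrative, not part of atoms2advisor.exe):

```python
# Hedged sketch: reject absurd magnitudes, unexpected negatives, and
# duplicate cutoff events before import. Thresholds are assumptions.
def validate_records(records, max_abs=1e9):
    seen_cutoffs = set()
    good, rejected = [], []
    for rec in records:
        key = (rec["tag"], rec["cutoff_time"])
        if abs(rec["value"]) > max_abs:
            rejected.append((rec, "value out of range"))
        elif rec["value"] < 0 and not rec.get("allow_negative", False):
            rejected.append((rec, "unexpected negative"))
        elif key in seen_cutoffs:
            rejected.append((rec, "duplicate cutoff event"))
        else:
            seen_cutoffs.add(key)
            good.append(rec)
    return good, rejected

recs = [
    {"tag": "TK101", "cutoff_time": "2024-01-01", "value": 500.0},
    {"tag": "TK101", "cutoff_time": "2024-01-01", "value": 510.0},  # duplicate
    {"tag": "TK102", "cutoff_time": "2024-01-01", "value": 1e12},   # too large
]
good, rejected = validate_records(recs)
print(len(good), len(rejected))  # 1 2
```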
Problem Statement: This Knowledge Base article provides an answer to the following question: What does a circled white or yellow number (2) mean next to an active movement as seen in the Aspen AtOMS Client tool shown below?
Solution: The circled number (2) indicates a condition called a double movement. A double movement occurs when the node selected to measure the movement (remember the control element, source, destination or lineup?) has two active movements at the same time. For example, you have a movement from a manifold to a tank, and the only measurement source available is the tank (difference in tank levels, for example), and it turns out that the tank has two or more active movements within the same timeframe. AtOMS will calculate each movement individually, but the values will be wrong because the level of the tank is affected by two movements at the same time. The number (2) tells you that the movement was calculated as a double movement, so the total quantity cannot be calculated correctly.
White: it is a confirmed double movement
Yellow: it is a suspected double movement
A suspect double movement is marked when an overlap in times is detected on active movements. It is marked as suspect because sometimes operators forget to close a movement within a reasonable amount of time after the movement actually finishes in the field. A movement that was left active longer than it should have been can produce a double movement condition that does not really exist. A confirmed double movement is set when the overlap in times is for completed or closed movements. Keywords: None References: None
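The core of the double-movement check is interval overlap on the same measured node. This sketch is illustrative only (AtOMS does this internally); an open end time for a still-active movement is treated as "now":

```python
# Hedged sketch: two movements on the same node are a double movement
# if their time windows overlap. Data and function name are illustrative.
from datetime import datetime

def overlaps(a_start, a_end, b_start, b_end, now):
    a_end = a_end or now  # still-active movement: open-ended until now
    b_end = b_end or now
    return a_start < b_end and b_start < a_end

now = datetime(2024, 1, 1, 12, 0)
m1 = (datetime(2024, 1, 1, 8, 0), datetime(2024, 1, 1, 10, 0))   # completed
m2 = (datetime(2024, 1, 1, 9, 0), None)                          # still active
m3 = (datetime(2024, 1, 1, 10, 30), None)                        # still active

print(overlaps(*m1, *m2, now))  # True  (suspect if active, confirmed if closed)
print(overlaps(*m1, *m3, now))  # False
```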
Problem Statement: Can Aspen AtOMS use tags defined by two different Definition Records for Instrument Tag configuration?
Solution: Yes. Aspen Tank and Operations Manager allows you to configure multiple definition records for Aspen InfoPlus.21 tag configuration. For example, you can use IP_AnalogDef for plant data and PMCRealLabDef for LIMS data. AtOMS reads tag values from Aspen InfoPlus.21 tags by tag name only, not by definition family. Keywords: Definitions multiple Instruments AtOMS References: None
Problem Statement: What is the meaning of an Informational message while importing data from AtOMS to Advisor using AtOMS2Advisor.exe interface?
Solution: AtOMS2Advisor is an interface that imports AtOMS movements and inventories into the Advisor model. While importing the data, an informational message appears if Advisor Connect is set to manual mode. This message indicates that the AtOMS data was successfully imported into the Advisor import table. Since Run Advisor Connect is not set to AUTO, you will have to import it manually using Advisor Importer. To auto-import the Advisor import tables into the Advisor data tables, enable the Auto Run Connect setting; the message will then disappear and the data will be imported into the Advisor data tables directly. Keywords: AtOMS2Advisor Auto AdvisorConnect import References: None
Problem Statement: This Solution describes the process by which a movement in Aspen Tank Operations Manager (formerly called AtOMS) goes from STARTING to ACTIVE status.
Solution: A movement goes from ISSUED to STARTING by a command from AtOMS Client. The monitor is not involved. At every scan cycle of the Movement Monitor (the Activation Interval in the AtOMS Monitor Configuration, 300 seconds by default), it will get the list of STARTING movements. For each movement:
1. It will read all the instruments from the Real Time Database (RTDB, e.g. Aspen InfoPlus.21, PI Historian or PHD) for the start time.
2. Once it gets those values, the Monitor will compute all calculations and save all the values in the AtOMS database.
If both operations mentioned above are completed successfully, it will update the status of the movement from STARTING to ACTIVE. If there is an error either reading the values from the RTDB or calling the calculations, it will write a message to the log file and will NOT update the status of the movement to ACTIVE. Keywords: Movement status Starting Active References: None
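The scan cycle described above can be sketched as follows. This is a hypothetical model of the logic, not Monitor code: the `read_rtdb` and `run_calcs` callables stand in for the RTDB read and the Advisor calculations, and a failed movement simply stays STARTING for the next cycle.

```python
# Hedged sketch of the Movement Monitor scan cycle: a STARTING movement
# becomes ACTIVE only if both the RTDB read and the calculations succeed.
def scan_starting_movements(movements, read_rtdb, run_calcs, log):
    for mov in movements:
        if mov["state"] != "STARTING":
            continue
        try:
            values = read_rtdb(mov["instruments"], mov["start_time"])
            run_calcs(mov, values)
        except Exception as exc:
            log(f"Movement {mov['id']} not activated: {exc}")
            continue  # state stays STARTING; retried on the next cycle
        mov["state"] = "ACTIVE"

def read(instruments, start_time):
    if not instruments:
        raise IOError("RTDB read failed: no instruments configured")
    return {tag: 42.0 for tag in instruments}

msgs = []
movs = [{"id": 1, "state": "STARTING", "instruments": ["LVL1"], "start_time": 0},
        {"id": 2, "state": "STARTING", "instruments": [], "start_time": 0}]
scan_starting_movements(movs, read, lambda m, v: None, msgs.append)
print([m["state"] for m in movs])  # ['ACTIVE', 'STARTING']
```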
Problem Statement: The Aspen AtOMS Client Application will not Launch on a Windows Vista machine if the user is not configured as a Domain Administrator in the AFW Security for AtOMS.
Solution: Steps to be completed to properly implement AFW Security on Windows Vista machines with domain users:
1. Configure the AFW Security Server with all the required roles and users by synchronizing AFW with Active Directory to get all the required domain users added as part of the AFW Security for AtOMS.
2. Point all the Vista client machines to the configured AFW Security Server by using the AFW Tools utility to change as needed the RepositoryServer, URL, and WebServer entries located on the Client Registry Entries tab, and the URL in the Server Registry Entries tab.
3. For each domain user that is configured to use the Aspen AtOMS applications, add each domain user (individually) as a LOCAL ADMINISTRATOR on the client machines.
4. Enable AFW Security on each client machine if it was previously disabled in the registry. The AFW Security keys in the registry that control the security are DisableSecurity and DisableShipSecurity, if they exist. These keys must be set to 0 if they were previously enabled and set to 1.
5. On Windows Vista client machines, go to the Control Panel | User Accounts dialog and, for the setting labeled Turn User Account Control (UAC), TURN IT OFF (i.e., disable this feature in Windows Vista) and restart the machine.
6. On the AtOMS server, log in and navigate to the AtOMS Admin Tool | Shipping | Config dialog, find the Tank Security Area setting, and SET its value to 0 if it is 1.
7. ONLY if required, on each client machine MAP a network drive under each domain user to the C:\ drive of the AtOMS server as DOMAIN ADMIN, save the admin password, and reconnect and log in. This may be an optional step, but in some customer test cases this last step was required.
Keywords: Admin AFW Security AtOMS Client Domain User Registry Windows VISTA References: None
Problem Statement: This knowledge base article explains why the following error can be returned by AtOMS or Aspen Enterprise Server.
Solution: This error indicates that Oracle was unable to insert any more records in the database. After the DBA extended the size of the tablespace, Oracle was back online, but Aspen Enterprise Server (AtES) had to be fixed because it does not recover automatically from Oracle errors. To do this, use the Aspen Enterprise Server Administrator to unload the model, and then reload the model again.
There is a corollary error which may occur for some clients. The first thing to test is the link to the AtOMS database. For this, use Windows Explorer to open the directory where AtOMS is located and double-click on AtOMS.udl. This should bring up the standard OleDB editor for the operating system, which will likely generate an error stating that the driver is not available. By default AtOMS tries to use OleDB drivers. The OleDB driver used should be the correct OEM OleDB driver for your database (Oracle, SQL Server or Microsoft Access). If the OleDB driver is missing, the recommended solution is to install the correct driver. OleDB drivers for most versions of Oracle can be downloaded from the Oracle web site. It is recommended to install an OleDB driver for the same version of the Oracle tools that are used on site. Keywords: ORA-01653: unable to extend table ADVISOR3.OLOULOG by 1024 in tablespace ADVISOR_TS Error 0x1AD connecting to Model Class not registered References: None
Problem Statement: After a Version 7.0 install, the error "Establishing BPC Connection. Please review your BPC configuration." is encountered.
Solution: This occurs because the BPC connection has not been properly configured. If the intent is to use BPC, install the BPC component and configure it according to the documentation provided. If you will not be using BPC, then do the following:
1. Open flags.m2r_cfg.
2. Change the value from 1 to 0 for this flag, so it looks like this:
CDM_RESOURCE_SERVICE_ENABLE = 0
3. Close the file, saving changes.
4. Run codify_all.cmd from the same directory to update your AeBRS configuration.
MOC should now open correctly without this error. Keywords: References: None
Problem Statement: In many Aspen eBRS systems, orders are always in process. So when a Basic Phase Library (BPL), Recipe (Recipe Procedure Language, RPL) or Master Recipe (MR) is edited, and a new version created (making the old one Obsolete), what impact does that have on active orders in the system? Does it mean new orders can no longer be created that use any of those Aspen eBRS objects?
Solution: A key item is this flag, found in flags.m2r_cfg, shown here with its default value:
EFFECTIVE_RECIPE_VERSION=0
This default value means the effective periods of two versions of an RPL can overlap. If the flag is set to 1, when a new version of an RPL is Certified, the expiration date is written automatically to the previous recipe, and it can no longer be used to start new orders.
However, there is one other key factor: the hierarchical nature of Aspen eBRS. This dictates that objects further up the hierarchy take precedence. So even if an RPL has an expiration date being enforced by a setting of EFFECTIVE_RECIPE_VERSION=1, it could still be used by a certified MR which is not expired. This works from the bottom to the top of the hierarchy. Expired BPLs can still be used by RPLs that are not expired. The limitation is that a new RPL being designed would not be able to add a reference to an expired BPL, and a new MR would also not be able to make a link to an expired certified RPL.
In the Aspen eBRS hierarchy, the top object (above MR) is any order that exists in the system. So even if an MR is edited, saved to a new version, and the old one made Obsolete, the active orders already in the system created via that now-obsolete MR are unaffected (see the Aspen eBRS object hierarchy).
In summary, there is no risk or issue certifying new MRs, RPLs or BPLs. Orders already in progress will continue until completion, and even if objects further down the hierarchy are expired, they can still be used by objects higher up until every object from top to bottom is expired. Keywords: None References: None
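The precedence rule above can be modeled in a few lines. This is an illustrative sketch of the described behavior, not eBRS code: an expired child can still run through a non-expired parent, but a new link to an expired object is refused.

```python
# Hedged sketch of eBRS effectivity precedence (function names hypothetical).
def can_start_order(mr):
    # Only the top-level object's effectivity matters when starting an order.
    return not mr["expired"]

def can_link(parent_draft, child):
    # Designing a new parent: links to expired children are refused.
    return not child["expired"]

old_rpl = {"name": "RPL_v1", "expired": True}
mr = {"name": "MR_v1", "expired": False, "uses": [old_rpl]}

print(can_start_order(mr))                   # True: expired RPL still runs via this MR
print(can_link({"name": "MR_v2"}, old_rpl))  # False: new link to expired RPL refused
```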
Problem Statement: Tables with many rows in MOC can provoke an Out of Memory error if opened without a filter.
Solution: The first troubleshooting step for this issue is to make sure you have enough memory available for the Config module. Even if you have followed the advice in Solution 114292 to make more memory available for MOC (or, for the eBRS server, Solution 116637), the memory available for the eBRS Config, Audit and Templates modules is controlled separately. Follow the advice in Solution 119564 to allocate adequate memory to the Config module.
Once you have set memory appropriately, you can use the MAX_ROW_COUNT_FILTER flag as a warning that your tables are reaching the maximum size your system can handle. By default, this flag has an internal value of 100. That is why as soon as any table exceeds 100 rows, you see a warning when trying to open it. So, for example, by setting MAX_ROW_COUNT_FILTER=1000 in your flags.m2r_cfg file (and not forgetting to run codify_all.cmd after!), you set a higher, more useful threshold for this flag. The flag then gives you the opportunity to apply a filter before retrieving data from the table. When the warning comes up:
A. Say no to the prompt asking to retrieve the rows (the empty table structure is displayed.)
B. Click on a column header to use it as a filter.
C. Add a filter value to the filter box, and click the filter symbol (exclamation point.)
The result on this example table is as follows: a table with 101 rows filtered on 5 in the Number column returns 11 rows. Keywords: hang crash out of memory user table References: None
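The threshold behavior can be sketched as follows. This is an illustrative model of the flag's effect, not MOC code: an unfiltered fetch above the limit is refused with a warning, while a filtered fetch that fits under the limit goes through.

```python
# Hedged sketch: fetch refuses large unfiltered result sets, mimicking the
# MAX_ROW_COUNT_FILTER warning. Names and record shape are illustrative.
MAX_ROW_COUNT_FILTER = 1000  # raised from the internal default of 100

def fetch(rows, column=None, value=None, limit=MAX_ROW_COUNT_FILTER):
    if column is not None:
        rows = [r for r in rows if r.get(column) == value]
    if len(rows) > limit:
        return None, f"Table has {len(rows)} rows; apply a filter first"
    return rows, None

table = [{"Number": i % 10} for i in range(1500)]
_, warning = fetch(table)                        # too big without a filter
subset, _ = fetch(table, column="Number", value=5)
print(warning)
print(len(subset))  # 150
```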
Problem Statement: Development of efficient, reusable eBRS code and applications is helped by having best practices shared among those working in the environment. This Knowledgebase Article is a repository to gather and share those best practices.
Solution: Aspen Services organization has made available an eBRS Best Practices document (PDF attached) with recommended development guidelines. Keywords: None References: None
Problem Statement: Orders are in an executing state and can't be deleted. Typically this might occur after logging out of a workstation.
Solution: If the problem is associated with an existing workstation:
1. Close and reopen the MOC.
2. Log in and, as soon as the MOC comes up, go into the Orders module and cancel the order. Note: Don't try to open the order or it will get locked up again.
If the problem is associated with a workstation that is no longer on the network:
1. Change the workstation name in the config.m2r_cfg file located in \Program Files\AspenTech\AeBRS\cfg_source
# Workstation Identification (has to match a name in the CHK_WORKSTATION table)
WORKSTATION_NAME = WORKSTATION
2. Run codify_all.cmd
3. Continue with the procedure described above for an existing workstation.
Keywords: order execute executing delete References: None
Problem Statement: Troubleshooting and General Usage Tips for Process Explorer trends displayed in eBRS Basic Phases
Solution: 1. Basic Phase with an APEX trend hangs on execution: This is typically a Version 2004.2 problem. Some specific files needed to support display of Process Explorer trends in eBRS basic phases are missing from the initial installation. Install the latest cumulative patch for Version 2004.2 to resolve this issue.
2. Do I need to create a separate APEX plot for each plot I want to display in a Basic Phase? You can create a separate plot for each BP, but if the only variable factors are the timeline span and the tags displayed, consider creating one empty APEX file, and then for each Basic Phase you call, supply the timeline control Start and End times and the list of tags. Another important point is that the Version 2006 eBRS Design Guide is the first to include information about the Variables passed during execution (for example, this is where you dynamically set which APEX file to display), but those variables apply to all previous versions of eBRS that include support for the APEX component (Version 6.0.1 and after.)
Keywords: Trend hang APEX display References: None
Problem Statement: This knowledge base article describes whether or not is it possible to delete an existing user profile from Aspen eBRS.
Solution: Profiles can be deleted except for the case when audit trail data has been associated with the profile. If audit trail data has been associated with the profile the profile cannot be deleted even if no users currently belong to the profile. Keywords: message deletion References: None
Problem Statement: When trying to change the MOC user (after successful startup of MOC), or execute a stored procedure against the eBRS API, the following error is written to either the MOC debug file in the former case, or the API Server debug file in the latter case: 14:40:51: Diagnose = Invalid username or password. Please re-enter domain\username and password.
Solution: Verify that the machine running AeBRS is part of a domain, not a workgroup. Keywords: References: None
Problem Statement: What does an icon with a red circle and a white bar in the middle for the AtOMS movements indicate?
Solution: This RED sign in the AtOMS Client next to an active movement indicates that some instruments associated with the movement have BAD status in the historian, such as Aspen InfoPlus.21. In order to check why the status is BAD, we have to get the full error code that is causing this condition. Go to the Aspen AtOMS database and look in the ATOMS_MOVEMENT_EVENT table for events associated with the Monitor service, or the ATOMS_TANK_EVENT table for events associated with the Tank Monitor service. If the STATUS field contains zero, it indicates Good status; anything else, such as a very strange negative number, may represent a standard Windows error code. The Iis2atpd.dll file is the one that returns these error codes. You can search the Internet for the error code to get a description of the error. The following KB article from Microsoft provides definitions of various error codes: Microsoft KB Article (Error Codes Description). There is also a troubleshooting guide from Microsoft on how to resolve these errors: Error code Troubleshooting Guide. Keywords: Red circle Icon Stop sign Stopsign References: None
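A "very strange negative number" in the STATUS column is usually a signed 32-bit HRESULT stored as decimal. Converting it to unsigned hex gives a code you can actually look up; this helper is illustrative, not part of AtOMS:

```python
# Hedged sketch: interpret a signed 32-bit STATUS value as an HRESULT in hex.
def status_to_hex(status: int) -> str:
    return f"0x{status & 0xFFFFFFFF:08X}"

# -2147467259 is the decimal form of 0x80004005 ("Unspecified error"),
# the code seen in the AtOMS Monitor log example earlier in this document.
print(status_to_hex(-2147467259))  # 0x80004005
print(status_to_hex(0))            # 0x00000000 -> Good status
```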
Problem Statement: Aspen AtOMS2Advisor is an interface that imports movements and tank inventory into the Advisor application on a daily basis. By default, this interface displays tank attributes in Fractional Gauges, and this Solution explains how to change the display to Levels.
Solution: To change the display to Levels, go to File --> Preferences and set Fraction to '0'. Then close and reopen the screen. You can now see the tank attributes in Levels. Keywords: atoms2advisor fraction level References: None
Problem Statement: This knowledge base article provides a list of Aspen AtOMS error codes for the Tank Monitor and the Event Monitor.
Solution:
Descriptor                     ErrCode     Description
ERR_UNKNOWN                    0x84570001  Unknown
ERR_TIMERFAST                  0x84570002  Timer too fast
ERR_ACTIVATING                 0x84570003  Error Activating Movement
ERR_READING_LINE_TANK          0x84570004  Error Reading Line_Tank in AtOMS_Lineup table
ERR_BADDBLIB                   0x84570005  AtOMSServerDBLib version error
ERR_BAD_MOVEMENT_STATE         0x84570006  Invalid Movement state
ERR_RECONNECTING               0x84570007  Error reconnecting to ModelServer
ERR_PROCESS_CUTOFF             0x84570008  Error processing cutoff
ERR_BAD_CONTROL_ELEMENT_ID     0x84570009  Error in Control_Element_ID in AtOMS_Movement table
ERR_CALCULATION_ERROR          0x8457000A  Error in Advisor Calculations
ERR_LOADING_RTDB_DLL           0x8457000B  Error loading RTDB library
ERR_RTDB_CONNECTION            0x8457000C  Error connecting to IP.21
ERR_RTDB_OBJECT                0x8457000D  RTDB connection object could not be created
ERR_NO_MODEL_CONNECTION        0x8457000E  Model Server not connected
ERR_NO_EVENTS                  0x8457000F  No events
ERR_COM_ERROR                  0x84570010  COM error
ERR_ADVISORMODEL_UNAVAILABLE   0x84570011  Advisor Model is unavailable
ERR_MISSING_PARAMS_CALCNVOL    0x84570012  Not enough parameters for Advisor Calculation
ERR_MODELSERVER_UNAVAILABLE    0x84570013  Model Server unavailable
Keywords: Tank References: None
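When a log only shows the raw hex code, a small lookup built from the table above can translate it. This is an illustrative helper (only a few entries are reproduced here), not an AtOMS API:

```python
# Hedged sketch: translate an AtOMS ErrCode to its description.
ATOMS_ERRORS = {
    0x84570003: "Error Activating Movement",
    0x8457000C: "Error connecting to IP.21",
    0x84570013: "Model Server unavailable",
}

def describe(code: int) -> str:
    return ATOMS_ERRORS.get(code, "Unknown AtOMS error code")

print(describe(0x8457000C))
```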
Problem Statement: When creating a new movement from AtOMS client after a V7.2 fresh installation or upgrade, a user sometimes sees the following errors on both server and client machines: -2147467259. External Component has thrown an exception. Source= Aspentech.MSC.AORA.Model.Wrapper (modAtPM.GetBaseUOM) Clicking OK on the error specified above gives the following error: Error: frmNewMovement.Form_Load 91. Object variable or With block variable not set Source=AtOMSClient
Solution: These errors are caused when some units of measure (UOMs) are missing in the dependent Aspen Operations Reconciliation and Accounting (AORA) model. In order to add the UOMs, log in to the AORA model and from the GUI:
1. Go to Configure --> Global --> Units Of Measure --> Add
2. Add the default UOM for Flow Rate Mass as Kg/Day (the default UOM of mass (Kg) by the default UOM of time (Day)). In the Details tab make sure to select Flow Rate Mass.
3. Similarly, define additional UOMs for Flow Rate Mass as appropriate:
Kg/hr, conversion factor 24
MetricTons/Day, conversion factor 1000
MetricTons/hr, conversion factor 24000
Lbs/Day, conversion factor 0.45358
Lbs/hr, conversion factor 10.88592
After adding the UOMs for flow rate mass, try creating a new movement from the AtOMS client to see if this resolves the issue. If not, continue with step 4 to add the Flow Rate Volume units to the AORA model.
4. The default UOM for Flow Rate Volume is m3/Day (the default UOM of volume (m3) by the default UOM of time (Day)).
5. Create the UOM m3/Day similarly, following steps 1 and 2, but in the Details tab this time use Flow Rate Liquid.
6. If necessary, define additional UOMs for Flow Rate Liquid, as follows:
m3/hr, conversion factor 24
Liters/Day, conversion factor 0.001
Liters/hr, conversion factor 0.024
Bls/Day, conversion factor 0.1587
Bls/hr, conversion factor 3.8088
7. Press OK.
Try one more time to create a new movement from the AtOMS client and see if the errors are resolved. Keywords: References: None
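The conversion factors from the steps above can be sanity-checked with a small script. The factors are taken directly from this article, expressed as multipliers to the model's base units (Kg/Day for mass rate, m3/Day for volume rate); the function and dictionary names are illustrative:

```python
# Hedged sketch: multipliers to the base units, per the steps above.
TO_KG_PER_DAY = {
    "Kg/Day": 1, "Kg/hr": 24, "MetricTons/Day": 1000,
    "MetricTons/hr": 24000, "Lbs/Day": 0.45358, "Lbs/hr": 10.88592,
}
TO_M3_PER_DAY = {
    "m3/Day": 1, "m3/hr": 24, "Liters/Day": 0.001,
    "Liters/hr": 0.024, "Bls/Day": 0.1587, "Bls/hr": 3.8088,
}

def to_base(value, uom, table):
    return value * table[uom]

print(to_base(2, "MetricTons/Day", TO_KG_PER_DAY))  # 2000 kg/day
print(to_base(10, "Liters/hr", TO_M3_PER_DAY))      # 0.24 m3/day
```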
Problem Statement: Configuring the Lightweight Adapter for communication between eBRS and Batch.21 is straightforward, but there can be several configuration issues that keep correct communication from happening. This Solution provides a troubleshooting checklist if there is a communication problem.
Solution: If communication is not happening, check these possible causes. The checklist is arranged from most likely to least likely cause:
1. Verify that you have unique topics defined for LWA communication. A default installation of Batch.21 and eBRS configures the same generic topic. On a network with only one LWA connection being attempted there is no further need for topic configuration. The topic is defined on the eBRS side in the file called LWA.XML, found in this directory:
C:\Program Files\AeBRS\lwa.xml
and on the Batch.21 side here:
C:\Program Files\AspenTech\Batch.21\Data\BusinessProcessDocument\Batch21.Subscribe.xml
On the eBRS side, the topic itself is not easy to see in the lwa.xml file, because it is a concatenation of variables whose values are set in the file. If you replace the variables with their values you will end up with the same string that is clearly written in the Batch21.Subscribe.xml file's ServiceName key:
AEP.Prod.Batch.ProductionPerformance.Publish.BU1.Plant1
A typical configuration problem is that only one eBRS and Batch.21 server are intended to communicate, but additional Batch.21 servers are present on the same network. Those additional Batch.21 servers will automatically be listening on the bus for messages with this generic topic. The communication problem happens when one of these unintended servers picks up the message and responds to it. So even though the generic topic works fine with just one eBRS and Batch.21 server on the network, you may want to consider editing these files on both sides of the connection to ensure you always have a unique topic. That change could be as simple as changing your first eBRS/Batch.21 server pair to a Plant2 designation, so other default servers that are installed on the network are not listening for your specific topic.
On a network where the wrong Batch.21 server is responding to your TIBCO LWA messages, you can verify this is the problem by looking in the API Server debug file in eBRS. First, attempt communication, then find the most recent API debug file in this directory:
C:\Program Files\Aspentech\AeBRS\APIServer\debug
In that file, search for the string BORKER. If the LWA_DEBUG key is enabled, you will find pairs: a BORKER request (the message eBRS is trying to send) and a BORKER response (received from Batch.21). If the Batch.21 server that is answering doesn't have the same Batch area defined, the response will have a <ResultStatus> key with this message:
A datasource with the specified 'name' attribute was not found. It is possible that the datasource specified does not have a Batch.21 service defined.
2. Correct ADSA configuration. The API Server debug BORKER message also includes a reference to the ADSA datasource name which has your Batch.21 connection information, like:
Datasource xmlns=Aspentech.Batch21 name=MyADSAName
If your ADSA DSN has changed, make sure the BORKER request/response recorded in your API Server debug file is using the current, correct name. If the ADSA DSN is not correct, on the ADSA Server, search the registry for the wrong name, delete the cached ADSA information, and restart the ADSA Service. Typically, cached ADSA information is found in the registry at:
My Computer | HKEY_USERS | .DEFAULT | Software | AspenTech | ADSA
You can delete the entire ADSA key from here to get rid of the cached value.
3. Aspen Batch.21 BusinessProcessDocumentsService service. On the Batch.21 Server, LWA Publish/Subscribe information is handled by the Batch.21 BusinessProcessDocumentsService service. When this service starts, it launches an executable called RVD.EXE, which you can find using Windows Task Manager. First, make sure RVD.EXE is in fact running and in your task list. If you don't find it, try stopping and starting the service, and testing communication again.
Without this executable in memory, no communication can happen. Second, the service may successfully start the RVD.EXE executable, but the account you are running the BusinessProcessDocumentsService under may not have the correct privileges. Typically this service should run under the same account as the Batch.21 server service, ensuring that it has the right to both read and write Batch.21 data.
4. Make sure the RPL being used has a default mapping. Select the RPL, open the Batch Record Map editor, and accept the default choices. This establishes a default mapping if one is missing.
5. Make sure there is no other PC with the same name on the network. This problem typically happens when virtual PCs are on the network. If someone restores a ghosted image onto the network, communication may fail. Two possible reasons for the failure are:
- The ghosted image has the same unique topic as the primary server.
- If the topic is unique, LWA Services are confused because the PC name is duplicated on the network.
6. Make sure the LWA_SERVICE_ENABLE flag is set to 1. In the eBRS configuration files it is necessary to set the LWA_SERVICE_ENABLE flag to 1 before any communication can happen via the LWA. As soon as the flag has a setting of 1, all eBRS activity is broadcast on the bus.
Additional Information: There are two additional tools that can help analyze eBRS/Batch.21 communication problems. On the Batch.21 side, there are supporting files for LWA communication, located in this directory:
C:\Program Files\AspenTech\AEP\EnterpriseConnect\bin
The ecrvlisten.exe utility can be used to view and capture traffic on the TIBCO Infobus. From a Command Prompt, navigate to the bin folder and start an ecrvlisten.exe session by double-clicking the executable file. In Point 1 of this article, you will note that the topic is a concatenation of seven elements. Use the unique element of your topic to filter bus information and listen to just the topic you care about.
The other elements can be wildcards. For example, if you had changed Plant1 to Plant2 in your XML configuration files, you could instruct ecrvlisten.exe to only listen for Plant2 publish and subscribe messages, and pipe the results to a file such as Plant2Traffic.txt instead of just writing them into the DOS window. This file shows you what is actually broadcast onto the bus by the eBRS server, and any responses to that topic by any Batch.21 server.
There is another useful utility which can help test your XML outside the eBRS context. First grab some valid XML (a good source may be the output file from your ecrvlisten.exe session) and launch Batch21WebServiceTestApplication.exe from here:
C:\Inetpub\wwwroot\AspenTech\Batch.21\bin
Replace the generic set of security credentials with the same account and password used for your Aspen Batch.21 BusinessProcessDocumentsService service. On the input XML tab, paste your test XML. If you are taking your XML from a debug file or ecrvlisten.exe, make sure to clean it up: the starting and ending tags for your XML submission should be the Datasource tags. Your XML will be different of course; a generic example of a query can be taken from some ecrvlisten.exe output. After verifying your LWA setup using these steps, if you are still having trouble, please contact AspenTech Support. Keywords: null link 17:12:07: Error generating XML messages: 1,846157417,853119230,853119230 References: None
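The topic construction described in Point 1 (a concatenation of seven configured elements) can be sketched as follows. This is illustrative only; the element names and the helper function are assumptions, while the resulting default string is the one quoted in the article:

```python
# Hedged sketch: the LWA topic is a dot-joined concatenation of elements.
def build_topic(elements):
    return ".".join(elements)

default = ["AEP", "Prod", "Batch", "ProductionPerformance",
           "Publish", "BU1", "Plant1"]
topic = build_topic(default)
print(topic)  # AEP.Prod.Batch.ProductionPerformance.Publish.BU1.Plant1

# Making the topic unique, as recommended in step 1 of the checklist:
unique = build_topic(default[:-1] + ["Plant2"])
print(unique)
```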
Problem Statement: While using Aspen Production Execution Manager, this error is displayed on the screen:

Exception: java.lang.reflect.InvocationTargetException

Checking the debug file, entries like this are seen:

13:47:56: Exception java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
. . .
Caused by: java.lang.NoClassDefFoundError: com/aspentech/aep/ec/atadapter/interfaces/IAtAdapterMsgSend
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:620)
Solution: This error can occur if LWA_SERVICE_ENABLE is set to 1 on a system which in fact is not communicating with an Aspen Production Execution Manager system. To resolve it, open config.m2r_cfg on the server, search for the flag, and set it to 0. Afterwards, execute codify_all.cmd in the same directory. Restart MOC and test again. The error should be resolved. Keywords: None References: None
Problem Statement: You are presented with this warning message during the Windows startup process: Server certificate is not present in your trusted store. Do you want to trust the certificate?
Solution: Despite the warning about the certificate, the problem is a false alarm from Symantec Endpoint Protection. If it is installed and running on the same server where Aspen Production Execution Manager's (APEM's) Tomcat is installed, you may have Tomcat running on port 4338. Follow the instructions provided by KB article 114638 to change Tomcat's port. Keywords: tomcat symantec certificate expired References: None
Problem Statement: After successfully launching the MOC Thick Client, a user got the following error when attempting to start the Order Tracking module (from the MOC debug file): 13:34:07: Exception java.lang.NoClassDefFoundError: org/apache/soap/rpc/Parameter at Notifier.SOAPSubscribe.buildParams(clientNotifier.java:229) at Notifier.SOAPSubscribe.<init>(clientNotifier.java:222)
Solution: A NoClassDefFoundError happens when the execution environment cannot find the jar file the executing code is looking for. In this case there is a failure to find soap.jar. The AeBRS.CMD file contains a reference to all jar files needed during AeBRS operation. To find out what the problem is with the soap reference, look at the START line in AeBRS.CMD:

START= /i %J% -Xmx128M -Xms32m -XX:NewSize=5m -cp %L%\Apps.jar;%L%\swt.jar;%TCLIB%\activation.jar;%TCLIB%\mail.jar;%TLIB%\soap.jar;%TCLIB%\xercesImpl.jar;%TCLIB%\xmlParserAPIs.jar;%TLIB%\AtbpoUOM.jar;%D% %1

soap.jar should be found along the path %TLIB%\soap.jar. TLIB is defined earlier in the file as:

SET TLIB=C:\Program Files\Apache Tomcat 4.0\lib

so we expect to find soap.jar along the path C:\Program Files\Apache Tomcat 4.0\lib\soap.jar. In one customer's case soap.jar was in the correct location, but the AeBRS.CMD file did not reference it properly as shown above; correcting the START line resolved the error. Keywords: References: None
Problem Statement: In the MOC or API debug files you see the error message Exception Server.api.chkVMRuntimeServer$APIException: E654:API session is not active
Solution: This message means that the workstation name in the MOC tables is either not present or does not match the WORKSTATION_NAME key located in CONFIG.M2R_CFG.

1. Open the file CONFIG.M2R_CFG located in C:\Program Files (x86)\AspenTech\AeBRS\cfg_source\
2. Check for the key WORKSTATION_NAME. It should contain the fully qualified machine name.
3. Open the MOC Config module, then Workstations, and verify that you have the same name as in step 2.

Keywords: E654 API session is not active MOC Aspen Production Execution Manager APEM AeBRS References: None
Problem Statement: What are some tips and advice about working with the table control that go beyond the printed documentation?
Solution: The eBRS table control is often the most effective way to present information to the operator. This Solution is a place to post information about real-world experience of working with tables. Hopefully you will pick up something useful from this Solution. Please use the Feedback button to add your own advice, and a Web Librarian will edit your suggestions into the Solution.

1. Transparent functionality. eBRS tables that appear in Web-based Basic Phases should support all table functionality found in the eBRS Thick Client. So if you are running the latest Version 2004.2 client, or a later version, and find a deviation from this standard, please submit a Customer Support Incident noting the specific problem, and a Defect will be entered on your behalf.

2. No combobox. The combobox control is not supported as an element within an eBRS table. There are no plans to add that functionality at this time.

3. Problem with validation. Attempting to exit a cell in a table by clicking another cell in the table triggers any code in the validation action for the first cell. If the validation code gives a RETURN_NO result, the $selected_index Environment variable should continue to hold the identity of that cell. Instead, the $selected_index variable holds the identity of the new cell the user attempted to switch to, even though the focus remains in the problem cell. This problem is scheduled to be fixed in Version 2007, with hot fixes in Version 2006 and Version 2004.2.

4. TRANSFER_FOCUS() not applicable to tables. This function cannot be used in a validation action of a table cell to control which cell gets focus next.

Keywords: table control References: None
Problem Statement: How does eBRS determine the timestamp source? What standard is used for the timestamp information?
Solution: Older eBRS versions had configuration options that allowed using the local timestamp on the client running MOC. However, all supported eBRS versions now require that the timestamp be taken from the Apache server. This is set by default at installation time. It eliminates the variability that could occur in the batch record and audit trail by having events timestamped based on their local workstation time. The settings that retrieve this timestamp from Apache and make it available to all clients are found in flags.m2r_cfg:

# Synchronized timestamp from servlet - if not defined timestamp is obtained from DB (see DB_SERVER_TZ)
TIMESTAMP_HOST = DOBEASE4
TIMESTAMP_PORT = 8080
TIMESTAMP_URI = /AeBRSserver/servlet/aebrsutctime

TIMESTAMP_HOST is the node where the Apache web server is installed. TIMESTAMP_PORT tells the local client from which port to request timestamp information, and TIMESTAMP_URI is the specific address on the server from which the client retrieves the time. When a call for a timestamp is made, the Apache web service returns the time in UTC, and actions in eBRS are in turn recorded to the RDB in UTC format (see below). (The comment line shown above mentions DB_SERVER_TZ, but that is in fact deprecated. Apache is the only timestamp source!)

The timestamps themselves are in UTC format. This eliminates the complexity that can result from the fact that a particular time-related event may need to be viewed across different timezones and at different times of the year (i.e. when Daylight Savings is in effect, or not). UTC is the same across all time zones. So for example, if a UTC timestamp is recorded to a database at 11:36 p.m. in one timezone (for example Mountain Time in the U.S.), that same event viewed from a workstation in the Pacific time zone would appear one hour earlier, at 10:36 p.m.
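The store-one-UTC-value, display-per-client idea can be illustrated with a short, generic Python sketch. This is not part of eBRS itself; the date and the two zone names are arbitrary examples chosen to match the Mountain/Pacific scenario above:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One UTC value recorded to the database (arbitrary example date).
recorded_utc = datetime(2006, 6, 15, 5, 36, tzinfo=timezone.utc)

# Each client applies its own Regional Settings (including any
# Daylight Savings offset) only when displaying the value.
mountain = recorded_utc.astimezone(ZoneInfo("America/Denver"))
pacific = recorded_utc.astimezone(ZoneInfo("America/Los_Angeles"))

print(mountain.strftime("%H:%M"))  # 23:36 -- 11:36 p.m. Mountain
print(pacific.strftime("%H:%M"))   # 22:36 -- 10:36 p.m. Pacific
```

The stored value never changes; only the presentation differs per workstation.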
This is because the UTC value has the local Regional Settings (and any Daylight Savings offset, if applicable) applied to it before it is displayed on the given client. This default configuration, using Apache as the time source and UTC as the format, is the preferred configuration for any AeBRS system.

However, even though UTC is the default timestamp format, it is possible to store timestamps according to the local timezone:

BENEFIT: This is a helpful configuration option if other programs access information in the eBRS database but do not have built-in routines to convert UTC timestamps to the local timezone. (Keep in mind that if the routines to access eBRS data are written using the eBRS API, that UTC-to-local conversion will happen transparently, eliminating the need to store timestamps using local time.)

LIABILITY: The problem with this approach is Daylight Savings Time. In the Spring, when the clock springs forward an hour, there will be a one hour gap as the local timezone moves forward. In the Fall the issue is potentially more confusing, because the clock falls back an hour, and the audit trail and Batch record will be confusing to read, as a duplicate set of timestamps will be written for ongoing activity during the hour of the time change. These changes happen in the middle of the night, so it may only be an issue for plants that operate 24/7.

To make this configuration change, define the DB_RECORD_TZ variable according to the local timezone. Timestamp values are still retrieved from Apache as a starting point, and internally eBRS still uses UTC, but as eBRS writes its activity to the relational database it will convert the timestamps into the local timezone. Appendix C of the eBRS Installation manual contains a list of timezones. For example, checking the Regional Settings on an eBRS test server, they are set to Pacific Time (US & Canada).
A corresponding DB_RECORD_TZ setting would be: DB_RECORD_TZ=America/Los_Angeles For organizational purposes, a good place to add this change would be in flags.m2r_cfg following the lines that define the Apache timestamp source. Keywords: References: None
Problem Statement: On systems with large RPLs (hundreds of Basic Phases) and many workstations, customers may seek to implement specific steps which enhance performance. One parameter that controls how quickly the system processes the RPL is the CONDITION_EVALUATION_PERIOD flag, which has a default value of 10 seconds. This sets the interval at which the eBRS server checks for conditions ready to be evaluated. For example, if you have a Blocking/Branching condition and you are waiting for a RETURN_YES for flow to continue, that condition will be checked every 10 seconds. If the environment changes so that the evaluation would be Yes, but the most recent condition evaluation was one second ago, it will be an additional nine seconds before that condition is evaluated again.
Solution: To increase the condition evaluation frequency, open the flags.m2r_cfg file on the eBRS server (typically found along this path):

C:\Program Files\AspenTech\AeBRS\cfg_source

and add the CONDITION_EVALUATION_PERIOD flag. Since this is an internal, undocumented flag, consider adding an explanatory note like this:

# This flag sets how often the server checks
# for conditions ready to evaluate.
# See AspenTech KB #118120 for more details
# Default = 10, units are in seconds
CONDITION_EVALUATION_PERIOD=5

Once you have saved flags.m2r_cfg, run the codify_all.cmd executable in the same directory. The updated frequency will not take effect until the next time the Apache web server is stopped and started.

IMPORTANT NOTE: Always use a non-zero number when modifying this value. Zero is not a valid value for this flag, because it would leave the Apache web server in a constant state of activity. Also, please note that if your RPL is very large, or many conditions are constantly becoming available to be checked, changing this flag value may in fact not give a meaningful increase in responsiveness. In that case, contact AspenTech Support to discuss further tuning parameters which may be effective for your system.

Keywords: enhance performance References: None
Problem Statement: An error with the AMS Version 2004.2 release media means that important configuration files will be overwritten by the upgrade process, and some incorrect jar files are added to the Apache directory. Please follow the steps in this Article together with the eBRS 2004.2 Installation Guide to ensure you do not lose this critical information. If you are doing a new install instead of an upgrade, this Article does not apply. In that case follow the eBRS 2004.2 Installation Guide to successfully complete a new installation.
Solution: 1. Before installing the AMS Media Installer for AeBRS 2004.2, create a backup of all configuration files (these are files with the extension m2r_cfg). These files are located in the directory C:\Program Files\AspenTech\AeBRS\cfg_source

2. Execute the AMS Media Installer for AeBRS 2004.2 following the guidelines described in the Installation Manual.

3. Once the installation of AeBRS 2004.2 is finished, restore your original backup of the configuration files (with extension m2r_cfg) to the cfg_source directory (documented in Step 1, above) and run codify_all.cmd in this same directory by double-clicking on it.

4. Stop the Apache service and navigate to the location of your AeBRS jar files in the Apache folder structure. A typical path would be C:\Program Files\Apache Tomcat 4.0\webapps\AeBRSserver\WEB-INF\lib
Drag and drop all jar files contained in the zip attached to this Article, replacing the current ones found in the lib folder. There are a total of 6 jar files.

5. Start the Apache service and execute the DBWizard to finish the upgrade process. Depending on the version you are upgrading from, the upgrade process will further modify your configuration files (saving the current ones with a .back extension). At this point your eBRS system should be correctly upgraded to a functional 2004.2 system.

ADDITIONAL NOTE: Please refer to the Patch Release Notes for Patch 2004.2.1 or later for descriptions of other Defects resolved by the jar files in this Solution. Specifically, these jar files resolve:

CQ00214105 - When phase execution terminates (End of Session message hidden by .Net shell) all support for dispatching events is closed
CQ00219876 - Object repositories are not closed when DBWizard is closed.
CQ00219878 - Batch Record Objects with _ in their names were not supported by the migration.

Following this Knowledgebase Article while upgrading to Version 2004.2 is equivalent to installing Patch 2004.2.1.
However if a later Patch is available on the Support website (like 2004.2, update BA________, for example) install that later Patch to add further fixes to your eBRS installation. Keywords: Error: E4602: m2rException_4602 ORA-01006: bind variable does not exist EB9BD087B3FE8ACA104D9E52380A8B1A called null with arguments: Arg[0]=5.0.1 Arg[0]=6.0.1 E4602: m2rException_4602 R4003 References: None
Problem Statement: How do you determine which Temporary Tables are generated and used when executing selected Aspen Operation Reconciliation and Accounting (AORA) Reports, or when creating Custom Reports from the Standard Crystal Reports?

REASONS FOR ASKING: We are modifying some of AORA's Standard Report Files to build our own Custom Reports, but are having some problems trying to verify some of the report field and value linkages between the final report data that is generated into the Temporary Tables vs. the corresponding source data that resides in the actual database tables. We have also noticed on occasion that some of the Temporary Tables remain behind in the AORA database after running Reports in the Reporter, instead of being automatically deleted upon closing the reports and exiting Reporter. We have noticed this leads to report execution errors during subsequent report runs if the Temporary Tables left behind from the previous runs are not first deleted. As such, we would like to know which Temporary Tables are generated for each Standard or Custom Report so that we can better troubleshoot which reports are resulting in the Temporary Tables remaining in the database.
Solution: IMPORTANT: Please read all instructions and notes provided below in their entirety before proceeding with the documented Temporary Table determination procedures.

The procedure required to view and test specifically which Temporary Table or Tables get created for each Standard Report or Custom Report is as follows:

1. First DELETE all remaining Temporary Tables from the AORA database, if any still exist. See Solution ID 125252: How do I delete the Aspen Advisor Database Temporary Tables left behind after running Advisor Reports? http://support.aspentech.com/webteamasp/KB.asp?ID=125252

2. Next, put the Reporter in Design Mode. This will result in the Temporary Tables remaining in the database after successfully running and closing out a report. With the model open in Reporter, click File | Preference... from the File Menu. Click the Advanced tab located at the top of the Preferences dialog form. Check the "Set the reporter into design mode" check box. IMPORTANT: Remember to take the Reporter OUT of Design Mode when you have completed testing.

3. Then run each of the selected Standard or Custom Reports (RPT Files) one at a time, and use either your RDB (SQL Server or Oracle) management tools or the Aspen DBTools to review which Temporary Tables are created in the AORA database when each selected report is executed.

Example: Open the Advisor Model (Database) in the Aspen Advisor DBTools application. Run the following SELECT query using the Aspen Advisor DBTools application, or using the query tools available for SQL Server or Oracle, corresponding to the installed relational database type you are using:

SELECT * FROM sysobjects WHERE NAME LIKE 'R%';

Executing the above query will provide you with a list of all the Temporary Tables (R Tables) that exist in the database after executing a selected Standard or Custom Report (RPT File).
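If this check needs to be repeated between report runs, the name filter used by the query above can be mirrored in a small script. The sketch below is a generic illustration, not part of AORA; the sample table names are invented, and it applies the same LIKE 'R%' pattern as the query, which may also match any non-temporary tables whose names happen to start with R:

```python
def leftover_temp_tables(table_names):
    """Mirror the sysobjects query's LIKE 'R%' pattern: return the
    names starting with 'R', as candidate leftover temp tables."""
    return [name for name in table_names if name.startswith("R")]

# Sample names as they might come back from a sysobjects query.
sample = ["R100234", "OLOCONFG", "R100567", "SYSUSERS"]
print(leftover_temp_tables(sample))  # ['R100234', 'R100567']
```

Comparing the list before and after each report run shows which R tables that report created.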
OTHER IMPORTANT NOTES: You will need to make sure that in between each report run you delete the Temporary Tables generated by the previous report execution, so that there is absolutely no confusion about the associated tables created for and used by each report, just in case there are reports that utilize the same Temporary Tables.

Now, assuming you already have a good understanding of the AORA Database Tables (the purpose of each table and the type of data stored in each table), the linkages between the generated Temporary Table data used in the Reports and the corresponding source data that exists in the AORA Database Global, Physical, Configuration, Reconciliation, Simulation, and other data tables can be determined as follows:

1. First note the field names used in the Temp Tables.
2. Then use the AORA Database Help File (aspeniidb.chm), and verification queries as needed, to determine for the report data in the Temp Table the corresponding link back to source tables and data, based on matching the noted Temp Table field names and values with the respective database table field names and values (for pipes, vessels, instruments, etc.).

For an example of resolving the actual data source linkages per the last two steps noted above, based on testing applied to the DSRPD.RPT (Standard Daily Sales and Receipts) report execution, please review the following supporting knowledge base Solution: http://support.aspentech.com/webteamasp/KB.asp?ID=131488

Keywords: Custom Reports Database Help Design Mode Reporter Temporary Tables References: None
Problem Statement: What can one adjust if the Reporter shows other components but not the parent component?
Solution: This Solution is quite simple. Go to File | Preferences | Display and set the "Number of Parent Levels" field to zero. This tells the Reporter to report ALL components (the parent level is the lowest level). A common mistake is to set this field to 1, which shows all levels EXCEPT the parent level. Keywords: Component Missing Parent Child References: None
Problem Statement: When a new Aspen Advisor model is created, what is the default username and password?
Solution: When a new Aspen Advisor model is created, the default username is 'superuser' and the password is 'superuser'. This default password should be changed after the first time the model is opened. Aspen DBTools can be used to modify the password by opening up Aspen DBTools => File => Change password. You can also use Aspen DBTools to add additional users and grant the desired permission for the new user. Keywords: None References: None
Problem Statement: After installation of Aspen Advisor, the following errors are noticed when one attempts to run the expert system. AionDS/PC: Setup table not found - NLS.AES Error Starting the expert system - Error #19217 Error starting session for document
Solution: These errors indicate that the path to the expert system has not been defined correctly. Make sure that the path to the expert system is correct. Do this as follows:

Log in to Advisor as superuser or an equivalent. Go to File | Preferences and select the Expert tab. The Expert Path should point to the Expert directory in the Advisor directory structure. The installation default is C:\Program Files\AspenTech\Advisor\Expert

NOTE: If the Advisor software is installed on another computer, such as an application server, the Expert directory should reside on the local workstation. This is because the Expert directory is the holding directory for files created and used in the expert system reconciliation process. If the Expert directory is a shared directory, there could be file conflicts if more than one user runs the expert system at the same time.

Select OK, close the model, and log back in as a different user that does not have superuser privileges. Once this is accomplished, try reloading the expert system.

If the above does not work, try the following: Make sure the Expert directory also contains a file called OPTIONS.REC. Make sure that the AION variable in the AION.INI file in the Windows directory points to the INIT directory under the Advisor base directory. See below for an example:

AION=C:\Program Files\AspenTech\Advisor\Init

FOR ADVISOR V7.0.0: NOTE: If you are running Advisor v7.0.0 and having problems starting the Expert System even after you have applied this Solution, then you need to download and apply Service Pack 1 for AMS v4.0 in order for the expert system to work. Due to a defect in Advisor 7.0.0, the expert system will not work unless either SP1 is applied or the least squares mathematical reconciliation engine is licensed and installed. SP1 for Advisor 7.0 can be downloaded from knowledge base article 106046.
FOR ADVISOR V7.1.0: NOTE: If you are running Advisor v7.1.0 and having problems starting the Expert System even after you have applied this Solution, then you need to download and apply Service Pack 2 for AMS v4.1 in order for the expert system to work. Due to a defect in Advisor 7.1.0, the expert system will not work unless either SP2 is applied or the least squares mathematical reconciliation engine is licensed and installed. SP2 for Advisor 7.1 can be downloaded from knowledge base article 108238.

FOR ADVISOR V8.0 & LATER: Starting with Advisor v8.0, the Advisor model configuration tool is licensed separately from the expert system. If one installs the model configuration tool without installing the expert system, yet attempts to start the expert system, the message "Error starting expert system" will be returned. The most obvious clue to a non-installed expert system is the absence of the expert subfolder under the Advisor folder.

Keywords: Expert AION.INI References: None
Problem Statement: Aspen Advisor can be used to track product composition in tanks. If product tracking was not initially set up when the model was created, it can be added to the model later. A start date must be selected to begin tracking the products in the tank; it is best to select the start of a month.
Solution: As an example, tracking is going to begin April 1st and the model is using Fractional Gauges for the tanks. The first step is to manually initialize the product compositions in the tanks on March 31st. Open the model in the Advisor GUI and change the time to March 31. Initialize the composition readings in the fractional gauges. This can be done with the following steps:

From the Advisor menu go Data | Instrument Readings | Inventories | Fraction Gauges... and the Instrument Reading Gauge Fraction dialog window pops up. In the Instrument Reading Gauge Fraction dialog box, select the Product Comps view. Select the instrument gauge to be initialized with the product compositions. Double click on the instrument, or select the instrument and click the OPEN button, to bring up the Product Composition Listing dialog window. Click on the =0 icon button on the Product Composition Listing dialog; this will enable you to manually enter the composition values. To enter the composition value for a Product, double click on the product, or select the product and click the OPEN button, to bring up the dialog box to modify the composition.

Once the product compositions have been initialized for the tank gauges, the Simulator can be run to forward the Product Compositions to the next day. The first time you run the Simulator, use the following options:
For Simulate Values, select Product Comps
For Simulate Equipment, select Vessels and Pipes
For Simulation Strategies, select Update from reading
and the rest can be left as default.

On April 1st and every day thereafter, run the Simulator with the following options:
For Simulate Values, select Properties, Components, and Product Comps
For Simulate Equipment, select Vessels and Pipes
For Simulation Strategies, select Linear blend
and the rest can be left as default.
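As an informal illustration of the weighted-average idea behind the Linear blend strategy for a single tank (a generic Python sketch with made-up volumes, not AORA's actual implementation):

```python
def linear_blend(heel_vol, heel_frac, receipt_vol, receipt_frac):
    """Volume-weighted average of one product-composition fraction
    for a tank heel mixed with a receipt."""
    total = heel_vol + receipt_vol
    if total == 0:
        return heel_frac  # empty tank: composition unchanged
    return (heel_vol * heel_frac + receipt_vol * receipt_frac) / total

# 800 units of heel at 0.90 product fraction blended with
# 200 units received at 0.50 product fraction:
print(round(linear_blend(800, 0.90, 200, 0.50), 4))  # 0.82
```

Each daily simulation run effectively repeats this kind of blend for every tracked product fraction in every tank.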
After the model has been closed you can run the appropriate reports in the Advisor Reporter such as the Daily Tank Prod Comp report by volume and Daily Tank Prod Comp report by mass. Keywords: None References: None
Problem Statement: Where is the density of air stored in Advisor? Can the default value for the density of air be modified?
Solution: The air density is stored in the OLOCONFG table. There is no way to modify the air density through the Advisor GUI, however, the air density value in OLOCONFG can be modified through a SQL script or by manually changing the value in the OLOCONFG table. Keywords: References: None
Problem Statement: The Advisor simulator has two strategies for simulating properties to the next accounting day. This article describes how to configure individual properties to be simulated by the linear blend algorithm or be updated from readings.
Solution: You can simulate physical properties, components, and product components for all tanks and streams defined in an Advisor database model by using the Simulator tool.

The Update from readings strategy simply propagates today's physical property, component, and/or product component values forward to the next accounting time interval. Today's property, component, and/or product component values may represent actual readings entered by the user for the current accounting interval, or values which were previously simulated during the previous accounting time period. The system will take the simulated values for yesterday and then update them with the latest readings on the tanks and pipes for today. Only values that are entered as readings for today are used. No GC calculated, product default, or global default values are used in the determination of the simulated results.

The Linear blend strategy calculates physical property, component, and product component values by linearly blending the values through tanks, blend headers, and manifolds. No blending will occur through process units (columns and reactors), since compositional and property data across columns and reactors changes significantly (the linear blending algorithm doesn't apply in this case). The linear blended values on the pipes exiting the process units (yields) are not calculated and will be updated from the current readings that have been entered for them.

It is possible to apply the linear blend strategy selectively, based on an individual property. This is useful for cases in which a linear average of the property values doesn't make physical sense. For example, it isn't accurate to linearly blend temperatures and pressures. More realistic temperature and pressure values will be generated by updating these properties from the previous day's readings. Here is how to specify that a particular property not be linear blended:

1. In Configure | Global | Properties, select the property, then double click on it. Click on the Details tab.
2. Change the Linearity attribute to 'Non-Linear'.

When the Linearity is set to Non-Linear, the linear blending algorithm will not be applied to this property. Instead, properties which are set to be non-linear will automatically be updated from the previous day's readings when the simulator is run with the linear blend strategy.

Note: If there are many user-defined properties defined in the model, setting the properties which should not be linear blended to Non-Linear will significantly reduce the amount of processing time required for the daily simulation process to complete.

Keywords: References: None
Problem Statement: You get the E4147 error message in a MOC debug file, in a Config debug file, or as a popup error message from MOC.

Sample 1 (from the MOC debug file):

05:27:24: DataModel.chkDMBatchRecord$Step.EBR_RECORD_STEP_47.Invalid record signature:ID_ORDER=-1335578485,ID_RECORD_STEP=-1335527345,ID_STEP=0,STEP_REP=1,STEP_RELATIVE_REP=1,ID_PARENT=null,ID_RECORD_OPER=null,START_DATE=2007-07-10 05:50:46.0,END_DATE=null,SIGNATURE=858355234
05:27:24: E4147

Sample 2 is a popup error dialog from the MOC client tool.
Solution: This message refers to the APEM database, which may not have been compiled correctly. Possible causes include:
- Corruption or tampering with the DB records.
- String characters invalid for the DB, e.g. accented characters.
- Precision problems with numbers; no APEM table has floating point numbers, so this problem can happen with user tables only.

To resolve the issue, do one of the following.

Solution 1: Open a CMD prompt, go to C:\Program Files (x86)\AspenTech\AeBRS\, and use this command to fix the problem table:

> sign.cmd EBR_RECORD_STEP

Solution 2: In Windows Explorer, navigate to C:\Program Files (x86)\AspenTech\AeBRS\cfg_source and open Flags.m2r_cfg with Notepad. Add or edit the flag:

CHECK_ROW_SIGNATURE_ON_LOAD=0

This will disable all signature checking.

Keywords: E4147 Database has been modified externally MOC References: None
Problem Statement: By default the MOC client will create debug files, which can take up hard disk space on server or client machines.
Solution: You can purge the debug files by adding a flag/key in the config.m2r_cfg file located in the C:\Program Files (x86)\AspenTech\AeBRS\ folder. This key will remove debug files after a set number of days. If you wish to keep the debug files while using this key, you will need a script or batch file to move the log files to a repository.

Open config.m2r_cfg, located in the folder above, in Notepad and add the key DEBUG_PURGE_PERIOD=XX, where 'XX' is the number of days, for example 30. Save the file and make sure you run the Codify All batch file in the same folder to process the configuration files.

Keywords: DEBUG_PURGE_PERIOD Log file APRM Aspen Production Execution Manager Debug MOC References: None
Problem Statement: When attempting to run MOC you receive a run-time error message. The AFW tools test page loads fine, indicating it's not a connectivity issue, and other workstations are able to run MOC without any problems. If you then try to refresh the cache in the AFW Security Client tool, you receive another error.
Solution: Most likely the local cache is corrupt and needs to be recreated. Delete the contents of this folder:

C:\Documents and Settings\All Users\Application Data\Aspentech\AFW

Then restart the Aspen Security Client service. You should be able to open MOC now. Keywords: MOC run-time error refresh cache afw References: None
Problem Statement: In V7.3 and earlier under some rare circumstances a deadlock may occur, causing multiple MOC clients to freeze.
Solution: This has only been reported on APEM systems that use SQL Server as their RDB. To avoid this issue, set the following flag in the SQL Server specific configuration file, db_mcsql.m2r_cfg, for each client: AVOID_DB_SELECT_IN_TRANSACTION=1 Future versions of APEM should be enhanced to include this flag setting automatically. Keywords: Database error: Transaction (Process ID 78) was deadlocked on lock resources with another process deadlocked on lock resources 04:09:27: Exception com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 62) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. References: None
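To apply the setting above, the flag line in the SQL Server-specific configuration file would look as follows (run codify_all.cmd afterwards, as with other APEM configuration changes, so the setting takes effect):

```
# In db_mcsql.m2r_cfg on each client
AVOID_DB_SELECT_IN_TRANSACTION=1
```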
Problem Statement: To help attain/maintain good performance in an Aspen Production Execution Manager (APEM) database that has been in production for some years, it is a good idea to archive and then delete order activity from the system. It is a three-step process: (1) mark orders with Archive status, then (2) create an external archive package of data for those orders, upon which the orders then have Ext. Archive status, and finally (3) any order with Ext. Archive status is eligible to be deleted. Deletion shortens indexes, making current order-related activity faster to retrieve. But what exactly is removed from the system after these steps are followed?
Solution: The information removed from the database is essentially everything that was added to the external archive package: 1. Control Recipe. That is a copy of the RPL that was made at the time the order was created. The RPL contains the entire structure of the order. That instance of the RPL is essential to preserve, because over time RPLs can be edited and modified. The Control Recipe not only is a picture of the RPL at that moment in time, but also preserves any Control Recipe edits that were possibly done while the order was in production. For large RPLs, this RPL instancing for every order can start to use significant DB space over time. 2. Screenshots. Even more so than the Control Recipe, all the screenshot reports that were generated for each order are also part of the external archive package. This is likely the single greatest savings in terms of DB space, since it is graphical images. 3. Order-related audit information. This additional information about any auditable activity related to the order is archived and removed also. RPLs and BPLs themselves, and all their versions, are never removed from the system, since their existence is essential to the identification and history of the system itself. They are not considered a potential cause of performance degradation. What is key is to put in place a strategy to archive the daily activity on the system -- order activity -- on a scheduled basis, so the system does not grow without management. Keywords: archival reduce size optimize References: None
Problem Statement: In the Workstation module in MOC users may only want to see order phases that are in the 'Ready for execution' status. By default you cannot filter statuses so you will see all phases of all orders you have selected.
Solution: To see only phases in 'Ready for execution' status, add a flag on the server. Follow these steps to make the change for all clients. 1. On the APEM server, open the flags.m2r_cfg file with Notepad. This is typically located in C:\Program Files (x86)\AspenTech\AeBRS\cfg_source 2. Add the key WORKSTATION_SHOWS_READY_PHASES_ONLY=1 3. In the same folder, run the file Codify_all.cmd to commit the changes. 4. Close MOC and reopen it for the changes to take effect on clients. Keywords: MOC AeBRS Aspen Production Execution Manager Workstation Ready Status Flag References: None
Problem Statement: When you upgrade an Aspen Production Execution Manager 2004, 2004.2, 2004.2.8 or 2006.0 database to a Version 8.5 or Version 8.7 database, you get the error message An internal error occurred within Wizard.dll. Error: Error HRESULT E_FAIL has been returned from a COM component and the Aspen Production Execution Manager database is NOT successfully upgraded.
Solution: This error occurs when using the Aspen Database Upgrade Wizard to upgrade a database from versions 2004, 2004.2, 2004.2.8 or 2006.0 to Version 8.5 or 8.7. Two error messages are displayed. To resolve this issue you will need to edit the XML file called ProductionExecutionManager.XML. This file is located in C:\Program Files\AspenTech\DatabaseWizard\Configuration on Windows 2012, or it may be located in C:\Program Files (x86)\Common Files\AspenTech\DatabaseWizard\Configuration on Windows 2008. 1. Open this file with Notepad or another editor and scroll down to the upgrade section. 2. Look for your version of the database. For example, you will see <Upgrade from=8.0.0 to=8.0.0.2.> 3. Remove the trailing decimal point from each of the four affected version entries. 4. Save the XML, close the wizard and run it again. This should allow you to continue the upgrade. Keywords: APEM AeBRS Upgrade 2004 HRESULT Database References: None
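As an illustration of step 3 above (the version strings and attribute quoting here are representative, not copied from an actual file), the fix is to delete the stray trailing period from each affected Upgrade entry:

```xml
<!-- Before: note the stray trailing period in the "to" attribute -->
<Upgrade from="8.0.0" to="8.0.0.2." />

<!-- After: trailing period removed -->
<Upgrade from="8.0.0" to="8.0.0.2" />
```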
Problem Statement: When upgrading an Aspen Production Execution Manager database with the AeBRSInstaller tool you get the error 'PROC design compiler terminated'.
Solution: This error refers to a BPL (Basic Phase Library) for which one of the defined scripts cannot be found, or for which a script is present but fails to compile. The migration process can complete, although it will show this error. You can view the error messages in more detail by looking at the diagnostic log for the AeBRSInstaller tool. By default this is located in the folder: C:\ProgramData\AspenTech\AeBRS\Installer\Debug From the log files you will be able to see which BPL is failing; then, in MOC, create a new version of that BPL. Run the installer again to compile, and the new version created should resolve this issue. Keywords: PROC Compile APEM AeBRS MOC References: None
Problem Statement: When attempting to open http://WEB21SERVER/Aspentech/OrderManagement you receive the following error message: HTTP Error 404.2 - Not Found The page you are requesting cannot be served because of the ISAPI and CGI Restriction list settings on the Web server.
Solution: First make sure you have the right components installed in accordance with KB #129413 - How to properly configure IIS in Windows 2008. If the installed features are correct, open IIS Manager, click on the server name and open the ISAPI and CGI Restrictions feature. Make sure the ASP.NET 4.0 entry is set to Allowed. If the entry is not listed, you can manually add it and set its path accordingly. Keywords: ASP.NET 4.0 Error 404.2 Not Found ISAPI CGI Restrictions References: None
Problem Statement: In Version 2006 and later the AeBRS Installer takes care of configuring and verifying communication with Apache Tomcat, creating the security structure in Aspen Local Security, and initializing communication between the eBRS API and the relational database. If any failure is encountered when the installer is creating the roles and permissions in Local Security, use these tips to troubleshoot the problem. If the only failure is creating the Local Security roles and permissions, there is no need to re-run the AeBRSInstaller program once the troubleshooting steps below have resolved the issue. For a new installation, double-click on AFWCreate.cmd, found in the AeBRS directory. For a migration from a version before 2004, double-click on AFWMigrate.cmd.
Solution: 1. Check AFW Tools to make sure the URL path is pointing to the right security server (pointing to the wrong or non-existent server can result in the chkusernotuser message being recorded to the MOC debug file): http://DOBEASE4/AspenTech/AFW/Security/pfwauthz.asp This path, for example, expects a security server on node DOBEASE4. If this link does not reflect the right node name, edit it to change the node name, and then retry the AFWCreate.cmd script. 2. Often re-registering the AFW Security Client Service on the Security Server will resolve the issue. KB article 116439 has detailed steps on re-registering the Security Client Service. Once the Service is re-registered, try running AFWCreate.cmd again. 3. Make sure the account that you are running under on the machine where you execute AFWCreate.cmd has rights to access the AFWDB.mdb file on the Security server: C:\Program Files\AspenTech\Local Security\Access97 Go to the afwdb.mdb file location and check the file properties for access rights. The typical behavior when there is a file rights problem is that the script executes extremely fast (less than five seconds). After correcting any problems, try rerunning AFWCreate.cmd again. If you still do not get the roles and permissions created in the Security Manager, then gather the following troubleshooting information and log a call with AspenTech Customer Support: A. Enable verbose logging for the eBRS Security DLL. To enable verbose logging, run REGEDIT from a Command prompt on the eBRS server. Navigate to the HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\AeBRS folder. Add a new string value to the AeBRS folder called DLL_Verbose, and then edit it to give it a value of 1. After verbose logging is enabled, every interaction with Local Security will add to a log written to the root of the C drive with a unique filename that includes date and time, like this: AEBRSAFW_DLL_08102007_110301_Log.txt B. Verify on the Security Server that Worldwide Web Logging is on. 
Check under C:\Windows\system32\LogFiles\W3SVC1 to see if WWW log files are being generated. If logging is not enabled, follow Microsoft KB 300390 to enable logging. Save the attached AFWEventReport.txt file to both the eBRS Server and the Local Security machine, and change the file extension to .VBS. (The file needs to be renamed because a firewall may not allow the download of a file with a .vbs extension.) After setting up logging using steps A and B above, restart IIS on the Security server, and then run AFWCreate.cmd. After AFWCreate.cmd has finished running, double-click AFWEventReport.vbs on both machines. Create a zip file containing: 1. The log file of type AEBRSAFW_DLL* created at the root of the C drive on the AeBRS Server. 2. The most recent WWW log file, found at C:\Windows\system32\LogFiles\W3SVC1 3. The debug file generated by AFWEventReport.vbs (this log file will be found in whatever directory you executed the script from.) Attach this zip file to your new Customer Support incident. Keywords: None References: None
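Step A of the article above (enabling verbose logging for the eBRS Security DLL) can also be applied as a .reg file instead of editing the registry by hand. A sketch, assuming the registry path given in the article (on 64-bit Windows running 32-bit APEM the key may instead live under Wow6432Node):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\AeBRS]
"DLL_Verbose"="1"
```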
Problem Statement: By default Aspen Production Execution Manager (APEM) message dialogs, such as MESSAGE(), ERROR_MESSAGE() and CONFIRM() only allow one line of plain text. Is it possible to have a more useful, multi-line display?
Solution: Within APEM itself there is no provision to send anything other than a plain text string to an APEM message dialog. However, because the APEM coding environment itself is built in Java, it inherits some Java functionality. By embedding simple HTML in the string sent to MESSAGE(), some formatting is possible. The following APEM code examples demonstrate the technique:

s:=<html><b><u>T</u>wo</b><br>lines</html>
MESSAGE(s)

s:=<html>
s:=s+<center><font color='#FF0000'>Alert!</font><br>
s:=s+Quality deviations detected.<br>
s:=s+Check with Supervisor before approving Lot.</center>
s:=s+</html>
ERROR_MESSAGE(s)

To enable this functionality, it is necessary to set specific values for the DIALOG_MESSAGE_WIDTH flag. By default it is 100. For Version 2006 and earlier, set DIALOG_MESSAGE_WIDTH=10000 For Version 2006.5 and later, set DIALOG_MESSAGE_WIDTH=0 If the flag is not set, only the first 50 or 60 characters will receive HTML formatting, and the rest will be ignored. In these later versions, setting a non-zero value that matches the text area also causes any text to be auto-wrapped. For example: DIALOG_MESSAGE_WIDTH=150 Keywords: warning eBRS hard return concatenate References: None
Problem Statement: When Aspen eBRS flags are set so that communication happens via IIS (see KB Article 121383 for more details) there may be connection problems. This article records various problems and resolutions related to Aspen eBRS / Aspen Batch.21 communication using IIS.
Solution: If the IIS communication method fails, try using the Aspen Batch.21 Web Service test utility to submit the XML manually. KB Article 119035 has details on using the test utility. If there is a communication failure, the Output XML tab will give you information that should help in troubleshooting. For example, if the Output XML tab shows: The server has encountered an error while loading an application during the processing of your request. Please refer to the event log for more detail information. Please contact the server administrator for assistance. this indicates a problem with IWAM account synchronization. To resolve this issue see Microsoft Knowledge Base Article 255770. Keywords: BATCH_21_MOC_DIRECT References: None
Problem Statement: Does eBRS have hard-coded limitations? For planning an eBRS solution, it is useful to know if the product itself contains any limitations to the recipes, Basic Phases, tables, etc.
Solution: Like many Aspen Manufacturing Suite products, Aspen eBRS does not contain hard-coded limits. The expandability of the product in terms of recipe complexity, Basic Phases, user tables, etc. is determined by a combination of environment factors, like network speed, CPU power, memory of the PCs involved, and ultimately the efficiency of the code in the product itself. Keywords: capacity limit maximum References: None
Problem Statement: If the Apache server is not running, condition evaluation does not happen, and Aspen eBRS clients cannot progress past whatever Basic Phase they are processing at the time the server goes offline. The Watchdog is a process that pings the server during normal client operation, verifying availability. Sometimes the Watchdog process is too sensitive and gives false warning messages when the system is in fact functioning correctly. To turn off the Watchdog on systems experiencing this issue, review the following information:
Solution: The primary Watchdog test is to see if the Apache server responds in any fashion. This is the basic existence-of-connection test (by default set to 60 seconds): WATCHDOG_PERIOD=60 This is the basic watchdog flag and, if you set it to 0, you disable all watchdog warnings. If WATCHDOG_PERIOD is set to something other than zero, then the other flags matter: WATCHDOG_WAIT: (specified in seconds) Controls how much time the MOC client gives Apache for its response window to the WATCHDOG_PERIOD query before raising a warning. WATCHDOG_SERVER_EVALUATION: This is a separate thread that runs on the server itself. It watches Apache and makes sure that no more than 60 seconds passes between times that condition evaluation happens. However, if WATCHDOG_PERIOD is zero, any results from that process just go into the debug log on the server, and do not get returned to the client. WATCHDOG_EXCEPTION_DETECT: A yes/no flag (based on values 1 or 0) that controls whether or not any exceptions happening during Apache condition evaluation are returned to the clients (this can be a useful warning, since condition evaluation, happening on the server, does not normally have a MOC environment in which to pop up and warn of problems.) This behavior was originally a built-in feature, but the flags allow customization, which is important for environments that are functioning fine but give false warnings. Keywords: None References: None
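For example, to silence all Watchdog warnings on a system that is functioning correctly but raising false alarms, the flags.m2r_cfg entries described above would be set as follows (run codify_all.cmd after editing; the 120-second value in the alternative is illustrative):

```
# Disable the client-side Watchdog entirely; no warnings will be raised
WATCHDOG_PERIOD=0

# Alternatively, keep the Watchdog but give Apache a longer response
# window before a warning is raised:
# WATCHDOG_PERIOD=60
# WATCHDOG_WAIT=120
```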
Problem Statement: Are there plans to integrate the Aspen Production Execution Manager (APEM) audit module into Aspen Audit & Compliance Manager?
Solution: Beginning in V7.1, while the internal auditing component of APEM remains present (with no current plans to remove it), it is possible to also publish the APEM audit information to Aspen Audit & Compliance (AACM.) The feature itself is optional. It allows customers who already use AACM for their other auditing-enabled products (like Aspen InfoPlus.21 for instance) to maintain one central point for collecting and reviewing audit information. Please refer to the New Features section of the V7.1 Release Notes here for a brief technical overview of how the feature works. Keywords: None References: None
Problem Statement: For customers in timezones affected by the recent change in U.S. Daylight Saving Time rules, it is necessary to update the Java environment which eBRS depends on. This enables eBRS to correctly handle the change in the day when the spring forward and fall back time shifts happen in 2007.
Solution: The following steps should be applied to all versions of Aspen eBRS 6.0.1 and later. Note that versions older than version 6 were delivered with a JDK prior to version 1.4.0, for which access to TZUpdater needs to be contracted from Sun Support. 1. Stop the Apache Tomcat server and any AeBRS modules. 2. Apply Microsoft patch KB931836 for the OS. The patch is available from this link: http://support.microsoft.com/default.aspx/kb/931836/ 3. Download the TZ Update Tool from the Sun web site for the targeted JDK. It can be downloaded from this link: http://java.sun.com/javase/downloads/index.jsp * For a JDK older than 1.4.0, please call Sun support for assistance. Please read the following before applying the patch to understand general issues: http://java.sun.com/developer/technicalArticles/Intl/USDST/index.html http://java.sun.com/developer/technicalArticles/Intl/USDST_Faq.html 4. Apply the Sun patch by running tzupdater. Read the README file in the patch for detailed instructions. In the case of multiple installed versions of JDK/JRE, the patch needs to be applied to each version. Tip: run the patch using the specific java.exe. If you have unzipped the tzupdater to a directory, for example C:\Temp\tzupdater2007a, switch to that directory and run the tzupdater as: C:\j2sdk1.4.0_01\bin\java.exe -classpath . -jar tzupdater.jar -u -v C:\j2sdk1.4.2_04\bin\java.exe -classpath . -jar tzupdater.jar -u -v If you have a JRE installed without the JDK, you need to apply the patch for that JRE as well for other Java applications: C:\j2re1.4.2_10\bin\java.exe -classpath . -jar tzupdater.jar -u -v 5. Restart the Apache Tomcat server, and AeBRS modules as needed. Keywords: Service Pack Engineering Release time change daylight savings dst day light saving References: None
Problem Statement: What are the different communication methods to read/write data between Aspen Production Execution Manager (APEM) and Aspen Production Record Manager (APRM)? What are the advantages/disadvantages of each?
Solution: Starting with V7.2, APRM has an ODBC driver. This allows Batch data to be queried from APEM code using SQL-type SELECT * FROM * WHERE syntax. The initial V7.2 release is read-only. For V7.3, the plan is to enable writes via the same ODBC driver. Once writes are enabled, the easiest communication method between APEM and APRM will be to write SQL code to read/write APRM data. This ODBC-driver based method is the latest generation of APEM/APRM communication methods. The rest of this article covers previous methods. The previous communication method, still supported, is to form XML documents and submit them to the APRM server. This communication followed two general paths: Direct: BATCH_21_MOC_DIRECT=1 or via Apache: BATCH_21_MOC_DIRECT=0 The Apache communication path is legacy. BATCH_21_MOC_DIRECT was added in Version 2006 (and via cumulative patch can be added to a Version 6.0.1 or Version 2004.2 system.) Once this flag is set to determine a path, the communication itself happens via one of three transport methods: 1. Using the Lightweight Adapter (LWA) 2. Using IIS 3. Using API access Therefore valid flag combinations are: Direct Access Using the Lightweight Adapter (LWA) BATCH_21_MOC_DIRECT=1 LWA_SERVICE_ENABLE=1 Using IIS (default method; other flags set to 0) BATCH_21_MOC_DIRECT=1 Using API access BATCH_21_MOC_DIRECT=1 BATCH_21_USE_API=1 Access via Apache Using the Lightweight Adapter (LWA) BATCH_21_MOC_DIRECT=0 LWA_SERVICE_ENABLE=1 Using IIS (default method; other flags set to 0) BATCH_21_MOC_DIRECT=0 Using API access BATCH_21_MOC_DIRECT=0 BATCH_21_USE_API=1 This direct communication method is also available in Version 2004.2 by applying the latest cumulative Patch. The direct path is recommended by Aspen because communication is faster when Apache is eliminated from the communication chain: History of APEM/APRM Communication by Version APEM Version 6.0 was the first version to feature communication to APRM. 
This was done via the BATCH_21_SERVICE_ENABLE flag. With this flag enabled, APEM automatically assembled and sent XML to the APRM Web Service (via Apache) containing the same information already being written to the APEM batch record. The BATCH_21_XML_QUERY function (created at the same time to take advantage of this new feature) was designed to let a programmer selectively retrieve data from APRM. The design idea was that all data going to APRM was automatic, and data retrieval was as-specified by the programmer (see below for more detail on BATCH_21_XML_QUERY.) However a limitation of this architecture was that for a given solution, having all Batch Record information written to APRM might not be desirable, and furthermore, the data was written in a way that parallels the S-88 structural similarity between APEM and APRM (i.e. Unit Procedure information to a higher batch level, Basic Phase information to a lower batch level). And in a real-world application, specific pieces of information derived from an APEM screen (perhaps the Batch Yield from the Batch Record level) might be more useful as a characteristic in APRM at a higher batch level, but they were locked into a static relationship with the corresponding Batch Level 5. APEM Version 2004 featured the next generation of communication with APRM, adding the TIBCO Lightweight Adapter (LWA) technology as a communication layer choice, controlled by the LWA_SERVICE_ENABLE flag. LWA communication is considered a low-complexity configuration method, since all APEM data can be broadcast to an APRM system by changing just a couple of file settings. LWA attempted to improve on the Version 6.0 architecture by introducing a Mapping Tool. The mapping tool allows lower-level characteristic information (like the Batch Yield characteristic example) to be mapped to a higher level in APRM. 
The mapping tool works as designed, but since unique mappings need to be specified for every APEM screen when there is a need to move data to a different batch level, mapping can rapidly become a bottleneck in solution development workflow. Also keep in mind these Version 6.0 and 2004 flags are mutually exclusive -- you can choose one or the other, but should never set both to 1. Both flags are still supported in later versions for existing solutions that need to be migrated, but Aspen's recommendation for solutions built in Version 2004 or later is to leave both flags at 0 and enable communication via the flags described in the Direct Access or Access via Apache tables above. What are the Advantages of BATCH_21_XML_QUERY? Typically only a subset of the information generated by APEM is needed by APRM. So an alternative to broadcasting all APEM Batch Record information to be picked up by the APRM Web Service or the Lightweight Adapter is desirable. BATCH_21_XML_QUERY works by submitting XML documents assembled via APEM functions. The XML being submitted can either be writing to or reading from APRM (yes, writing, even though it is called BATCH_21_XML_QUERY). Whenever a BATCH_21_XML_QUERY submission is made it returns an in-memory response accessible by a handle. A set of APEM functions can then be used to extract specific information from the in-memory response. Keep in mind that legacy communication methods are still supported. So to efficiently use a solution based on BATCH_21_XML_QUERY, keep LWA_SERVICE_ENABLE=0 to avoid carrying BATCH_21_XML_QUERY requests and responses over the reliable but slower LWA bus. Also make sure BATCH_21_SERVICE_ENABLE=0 to avoid writing all Batch Record information to APRM. 
The two most efficient configuration choices for performance are: Example A BATCH_21_MOC_DIRECT=1 BATCH_21_USE_API=1 or Example B BATCH_21_MOC_DIRECT=1 BATCH_21_USE_API=0 Example A is the configuration choice with the highest performance possibility, but will require installing the APRM API on every MOC Thick Client. (This is done by choosing Software Development Kit under the Aspen Production Record Manager product from either the Client or Server install path on the aspenONE installation DVD.) Example B performance should also be very good and gives a much lighter footprint on each client workstation with lower maintenance in regards to installation time and Patch updates. No APRM components need to be installed on the MOC client since APRM is being contacted via IIS using the APRM Web Service. Keywords: None References: None
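Pulling the flags above together, a consolidated configuration-file sketch of Example A (the highest-performance direct/API path) would be:

```
# Direct communication to APRM using the APRM API
# (requires the APRM Software Development Kit on each MOC thick client)
BATCH_21_MOC_DIRECT=1
BATCH_21_USE_API=1

# Leave the legacy broadcast and LWA paths disabled
BATCH_21_SERVICE_ENABLE=0
LWA_SERVICE_ENABLE=0
```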
Problem Statement: Access to eBRS is managed through Windows user accounts. But when running the Database Wizard, a database account named AEBRS is created. What is this account for?
Solution: The AeBRS account is the account that AeBRS itself uses for access to the relational database. Keywords: References: None
Problem Statement: For programming, a non-proportional (fixed-width) font has advantages. It can especially improve readability when a line contains both single and double quotes. Courier is the new default in Aspen Production Execution Manager V7.1. On the other hand, non-proportional fonts take up more space for the same number of characters, so it is a trade-off.
Solution: Aspen Production Execution Manager changed the default font to Courier, favoring readability over saving space. However, the previous default font, Arial, can still be used by commenting out this key in flags.m2r_cfg: SOURCE_TABLE_EDITOR_FONT = Courier New, 0, 14 After commenting out the line with a number sign, run codify_all.cmd. The next time client machines reopen MOC, the programming environment will be in Arial. Keywords: coding References: None
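After commenting it out with a number sign as described above, the line in flags.m2r_cfg would look like this:

```
# SOURCE_TABLE_EDITOR_FONT = Courier New, 0, 14
```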
Problem Statement: Well designed RPLs are driven by parameterization. What are the character limits for the various parameters?
Solution: Up until Version 2006.5, some parameters were limited to 256 characters: chkDMElemParam chkDMMastRecipParamRaw However in Version 2006.5 and later those RPL parameters, and all others allow 2000 characters: EBR_RPL_ELEMENT_PARAM EBR_MASTER_RECIPE_PARAM EBR_ORDRPL_ELEM_PARAM Keywords: dimension maximum allowed allowable References: None
Problem Statement: When creating external archive packages in Aspen Production Execution Manager using the Administrator tool, the archive file created will automatically be placed in the following folder: C:\Documents and Settings\All Users\Application Data\AspenTech\AeBRS\BR_ARCHIVE Can that destination be changed?
Solution: The folder destination is defined by the ARCHIVE_ROOT key, which is internal to Aspen Production Execution Manager. To give it a value other than the default specified above, define the key in a Aspen Production Execution Manager configuration file (like path.m2r_cfg or flags.m2r_cfg.) If writing an API program, define ARCHIVE_ROOT in your program with a preferred path, and it will be used. If you are not familiar with defining Aspen Production Execution Manager keys in config files, see the first section of KB article 115510 for instructions. Keywords: archive path archive destination References: None
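A minimal example of redirecting the archive destination described above (the drive and folder shown are illustrative; use any location the APEM server account can write to). Add this line to path.m2r_cfg or flags.m2r_cfg, then run codify_all.cmd as with other configuration changes:

```
ARCHIVE_ROOT = D:\APEM_Archives
```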
Problem Statement: When using SQL Server, the default DB_URL line that defines the connection and connection properties is as follows in Aspen Production Execution Manager (formerly eBRS) Version 2006 and later: DB_URL = jdbc:sqlserver://{0}:{2};databaseName={1} This setting is found in the db_mcsql.m2r_cfg file, typically located here: C:\Program Files\AspenTech\AeBRS\cfg_source The default setting means some indexes are not being used. As Production Execution Manager DB size grows, this can lead to slower performance.
Solution: Modifying the line like this: DB_URL = jdbc:sqlserver://{0}:{2};DatabaseName={1};SelectMethod=cursor;SendStringParametersAsUnicode=false allows indexes to be used, which in some cases gives a significant performance gain. (Keep in mind that the above DB_URL is all one line; depending on your web browser your view may show it broken across two lines.) This modified DB_URL configuration will be included in the Production Execution Manager V7.2 release planned for Spring 2010. For V7.1 and earlier, the change needs to be put in place by an Administrator. This change is still being verified for non-English systems. If your language is other than English it is suggested to validate this change in a test environment before adding it to a production system. As a final note, after updating the db_mcsql.m2r_cfg file it will be necessary to run codify_all.cmd and restart any open Production Execution Manager client MOC sessions. ADDITIONAL NOTE: On some older Production Execution Manager systems (6.0, 2004, 2004.1 and 2004.2) the DB_URL line may look like this, because of a difference in the JDBC version: DB_URL = jdbc:microsoft:sqlserver://{0}:{2};DatabaseName={1};SelectMethod=cursor This solution still applies in that case. The rule of thumb is to leave your existing configuration the same and just add the missing parts at the end: the SelectMethod part if it is not there and, most importantly for the potential performance increase, the SendStringParametersAsUnicode parameter. With the alternate JDBC configuration shown immediately above, the end result would be: DB_URL = jdbc:microsoft:sqlserver://{0}:{2};DatabaseName={1};SelectMethod=cursor;SendStringParametersAsUnicode=false Keywords: screenshot performance custom speed slow increase References: None
Problem Statement: Sometimes an Aspen Production Execution Manager (formerly Aspen eBRS) order may go to a Cancelled by Phase status. Once in this status, no Basic Phases are available for execution. Why does an order go to this status? How can the order be reactivated?
Solution: When an order goes to Cancelled by Phase status, it can be for the following reasons: 1. Switching off the PC without exiting the Basic Phase currently executing on the screen. 2. The Java environment or Windows itself has a failure, and the PC needs to be rebooted. 3. An operator executing a Phase has chosen Cancel Phase (the blue X) instead of Stop Phase (the red X) in the Workstation module: To reactivate the order open the Order module, select the Order in Cancelled by Phase status, and then click on the Phases tab: The Phases tab contains a list of all Basic Phases for the current order, including their current Status and other useful information, like what workstations are currently executing phases. Highlight the Basic Phase that changed to a Cancelled state for reasons 1, 2 or 3 above, and choose Reactivate (the ability to reactivate a Phase is controlled by Local Security permissions, so if the Reactivate button is grayed out, make sure the account being used has administrative privileges): Controlling Cancelled by Phase Behavior The flag CANCEL_PENDDING_WORKSTATION_OPER determines how Production Execution Manager handles workstations that hang or otherwise get disconnected while executing a Basic Phase. (As a minor note, the flag itself does contain a typo. It is in fact PENDDING, not PENDING. The flag is found in flags.m2r_cfg.) The default value of the flag is 1. With this setting, upon log-in of the workstation, the Basic Phase that was executing goes to a Cancelled status, and the entire order gets the logical status Cancelled by Phase. This default setting ensures an Administrator is involved in analyzing why a workstation suddenly disappeared off the network and can take steps to troubleshoot the issue. If the flag is set to 0, when the workstation logs in the Basic Phase will go back to an Enabled/Ready status, and the Order maintains its Initiated (i.e. in progress) status. No Administrator intervention is required. 
Additional Notes
What happens if the workstation that crashed had a serious hardware failure, and it is not possible to bring it back online quickly? In this case, you may be left with a Basic Phase that shows as Executing in Order Tracking, with no obvious way to change its status. In a Production Execution Manager system with only thick clients (i.e. all workstations run Basic Phases via a local install, not via web-based phases), take the following steps:
1. Edit the WORKSTATION_NAME key in the config.m2r_cfg file on some other workstation, giving it the same name as the crashed workstation.
2. Save those changes, and run codify_all.cmd.
3. Open MOC and log in. Depending on the setting of CANCEL_PENDDING_WORKSTATION_OPER, this will revert the Basic Phase to Cancelled status (so it can be reactivated, following the instructions above) or to Enabled/Ready. The whole order status may change too, depending on the flag, and can be managed using the information above.
Steps 1-3 also work for web-based Basic Phases: just set the WORKSTATION_NAME key equal to the fully qualified name of the PC with the hardware failure that was running the web-based Basic Phase.
As an alternative, re-evaluation and update of all Basic Phase statuses can be provoked via an Apache restart. However, an Apache restart means all MOC clients will also need to log out and log back in again, and any web-based Basic Phases would be killed, so this should be the troubleshooting method of last resort.
Keywords: Cancel Phase Status Initiated freeze crash hang eBRS
References: None
Problem Statement: When using the Print Design option within the Designer, the output will typically be directed to HTML (using Internet Explorer) or PDF (using Adobe Acrobat). Internet Explorer automatically formats the columns of code in a table with auto-wrap enabled, so regardless of the width, it displays correctly:
[Figure: Auto-wrap in small Internet Explorer window]
[Figure: Auto-wrap in larger Internet Explorer window]
However, when output is directed to PDF, Adobe Acrobat does not auto-wrap columns, so longer lines of code may be cut off.
Solution: There are a couple of approaches to resolving the PDF printing problem:
1. Perhaps the most straightforward is to send output to Internet Explorer, then choose File, Print, and send the output to a third-party PDF generator, such as CutePDF.
2. It is also possible to leave the output flag set to PDF format (EBR_FORMAT_DOCUMENT=1, a flag found in path.m2r_cfg) but direct output to Internet Explorer. In that case, as long as Internet Explorer has a plug-in supporting PDF display, it will display the Print Design output. However, the wrapping problem remains.
3. When sending to PDF either directly, or via option 2, use the flag PRINT_DESIGN_ACTIONS_COLUMN_WRAPS to set a specific column width and enable the wrapping property. This flag takes an array of values and is found in flags.m2r_cfg. For example, with the values:
PRINT_DESIGN_ACTIONS_COLUMN_WRAPS=10,10,10,40
PDF output of the same code wraps within those column widths.
Additional Note: To change the default output application between Adobe Acrobat and Internet Explorer, enable the BROWSER_PATH_DOCV and BROWSER_PATH_DOCP flags for the application you want to use, and comment out the ones you do not (found in config.m2r_cfg).
Keywords: None
References: None
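A sketch of the config entries behind options 2 and 3 above (the four wrap widths are the example values from this article; whether the widths are measured in characters is an assumption, as the article does not state the unit):

```
# path.m2r_cfg -- output format flag (1 = PDF)
EBR_FORMAT_DOCUMENT=1

# flags.m2r_cfg -- one wrap width per column (example values from this article)
PRINT_DESIGN_ACTIONS_COLUMN_WRAPS=10,10,10,40
```

As with any Production Execution Manager config change, run codify_all.cmd afterwards so the new values take effect.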
Problem Statement: Debug key settings are read on startup by both the MOC clients, and also by the Apache server. For a MOC client, changing debug keys does not have such a great impact, since only the client needs to be restarted. But enabling and disabling debug keys for troubleshooting server issues is a bigger problem, since a restart of Apache forces a restart of all client machines to re-establish their connection to recipe and condition processing. And for systems with thin clients (i.e. running the order management or order execution web pages) an Apache restart immediately destroys those sessions, so the impact is even greater.
Solution: CQ00321586, introduced in the 2006.5.2 patch, allows changing Apache debug keys without the need to restart Apache. The new key controlling this feature can be added to a Production Execution Manager config file to take advantage of it. Here is an example entry added to flags.m2r_cfg:
# CQ00321586 is available in 2006.5.2 and higher.
# It allows change of Apache debug keys without the need to restart Apache.
# Default is 0, meaning no check. A value > 0 enables the key; the unit is seconds.
SERVER_DEBUG_KEY_RELOAD_FREQ=120
When the flag is first added, a codify_all.cmd will be required, followed by one Apache restart. After that, server-related debug keys can be added, codify_all.cmd executed, and Apache will pick up the changes within the specified polling period (in this case, every two minutes).
In the API Server debug log example, the SQL_DM and SQL_UTIL keys had been added, resulting in verbose debug output showing the SQL statements executed during Recipe evaluation (by default every 10 seconds). At 03:28:02, modified debug keys are picked up that eliminate SQL_DM and SQL_UTIL, resulting in a seven-minute gap until they are added again at 03:35:29.
Keywords: troubleshoot continuous down
References: None
Problem Statement: In older versions of Aspen Production Execution Manager (APEM), parameter editing happens in a flat table format, even though parameters themselves exist at multiple levels of the RPL and are therefore hierarchical.
[Figure: Old Editor]
How can parameters be presented and edited in an easier, more intuitive way?
Solution:
1. V7.2 introduces the new hierarchical parameter editor, which presents data grouped logically and displayed as a hierarchy.
2. When you define a range, it is displayed in a drop-down list, so a user cannot select a wrong value.
3. Range lists can now be populated from queries (e.g. mMDM, User Tables, etc.), from AFW, and also from config/lexical keys.
4. Boolean parameters are automatically set up with a range (Yes/No or Yes/No/Null).
5. Data entry is much easier, and does not require special knowledge of the APEM format:
   a. No need to enter double quotes for a string.
   b. Boolean parameters are set via a drop-down list.
   c. Arrays are presented as a table.
   d. When several arrays are grouped together, each item is also grouped, which prevents a mismatch while entering parameters.
6. Mapping to the upper level is handled automatically.
Keywords: None
References: None
Problem Statement: SetOrderParamDetails exists, but what about GetOrderParamDetails?
Solution: A function like GetOrderParamDetails has not been created. However, as a workaround, the following query will return the list of order parameter expressions:
SELECT *
FROM EBR_ORDRPL_ELEM_PARAM
WHERE ID_ORDER = (SELECT ID_ORDER
                  FROM EBR_ORDER
                  WHERE CODE = 'CM'
                    AND REPETITION_NO = 1)
  AND ID_ELEMENT = 0
Replace the order code ('CM') and repetition number as appropriate. If only one repetition of the order exists, the REPETITION_NO = 1 condition can be omitted.
Keywords: None
References: None
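As an extension of the query above, a join variant can return the order-level parameter expressions for every repetition of an order at once. This is a sketch only, assuming the same tables and columns named in this article (EBR_ORDRPL_ELEM_PARAM, EBR_ORDER, ID_ORDER, ID_ELEMENT, CODE, REPETITION_NO):

```
-- Illustrative only: uses the tables/columns named in this article.
SELECT o.CODE,
       o.REPETITION_NO,
       p.*
FROM   EBR_ORDRPL_ELEM_PARAM p
       INNER JOIN EBR_ORDER o
               ON o.ID_ORDER = p.ID_ORDER
WHERE  o.CODE = 'CM'      -- order code to inspect
  AND  p.ID_ELEMENT = 0   -- order-level parameters only
ORDER  BY o.REPETITION_NO
```

The join removes the need to know a specific repetition number in advance; filtering on a single repetition remains possible by adding the REPETITION_NO condition back to the WHERE clause.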