Problem Statement: How is water component treated in the BLOWDOWN Analysis tool in Aspen HYSYS V9 and V10? | Solution: The default program handling of water has changed in V10.0 compared to V9.0.
In V9.0, BLOWDOWN does not have the ability to accurately handle the aqueous phase. Water, when present, is automatically mapped in every BLOWDOWN analysis and is by default assumed to be miscible in the hydrocarbon phase.
Excessive water in the system breaks this assumption and can lead to thermodynamic flash errors and other convergence problems.
As a general practice, we recommend removing all water from the BLOWDOWN initial inventory before running the analysis. Experiments have shown that water is largely an inactive bystander in the depressurization analysis. Removing water also has the advantage of providing the most conservative temperature predictions.
In V10, water is by default removed from the BLOWDOWN component list. A message on the Mapping tab notifies the user of this:
This BLOWDOWN analysis is only partially accurate and considers the system without any water.
Note: This could also be done in BLOWDOWN V9.0 by removing water from the mapped components, obtaining the same result as above.
If the user would like to include water in the analysis, the component must be manually added to the BLOWDOWN component list by clicking the Edit Map button on the Mapping tab. Note that doing so assumes that water is miscible in the hydrocarbon phase; this assumption is not recommended for the reasons stated above.
Alternatively, a free water option has been added to V10.0. This option can be enabled by checking the box Global Free Water Phase on the System tab.
If this option is checked, water does not appear on the BLOWDOWN component list, and a message on the Mapping tab indicates the free-water assumption, which enables a free water phase to be considered in the analysis.
Keep in mind that the BLOWDOWN Analysis tool is intended for depressurization of hydrocarbon systems. It should not be used to analyze the blowdown of full-water systems.
Keywords: BLOWDOWN, depressuring, safety, water, mapping
References: None |
Problem Statement: Is it possible to use SPYRO pyrolysis kinetics with Aspen Plus? | Solution: Aspen Plus offers EOSPYRO, available as a USER3 model, to be linked with an externally generated file via the SPYRO program. The SPYRO program, which contains the pyrolysis kinetics, is a separate program licensed by Technip (formerly KTI). Aspen Plus does not include these kinetics in its databases.
Note 1: Starting with V11, Aspen Plus is a 64-bit program. It is possible to use 32-bit SPYRO 6 from 64-bit Aspen Plus via a wrapper interface file that is included with Aspen Plus. To use 32-bit SPYRO, ensure that SPYRO is not installed inside the C:\Program Files folder, as this folder is only available to 64-bit programs.
This wrapper slightly changes the interaction with SPYRO, and some changes in configuration files are needed. A new line in the file for SPYRO Subr Path is needed in the .CFG file (see below), and the Furnace Config File line has to include the full path to the file. In addition to this, in the rtopt.opt file, remove the entire line referencing USRKTI.dll. See Using SPYRO section on the Help Guide.
SPYRO Subr Path = <string>
The full path to the folder containing USRKTI.dll. This is only needed to use 32-bit SPYRO from 64-bit Aspen Plus.
Example: USRKTI.dll is installed as C:\Spyro\Spyro6\USRKTI.dll and the line should read:
SPYRO Subr Path = C:\Spyro\Spyro6
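Putting the two configuration requirements together, a minimal .CFG fragment might look like the following; the furnace config file name (furnace.cfg) and the install path are placeholders, not values taken from a real installation:

```
Furnace Config File = C:\Spyro\Spyro6\furnace.cfg
SPYRO Subr Path = C:\Spyro\Spyro6
```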
Note 2: With the Aspen Plus V11 Cumulative Patch 1 (CP1), the USER3 model SPYRO interface has been enhanced to include support for KS7 kinetics (original component slate only) available in SPYRO 7.
The enhancements to the Aspen Plus SPYRO interface have been implemented and tested using a preliminary (unreleased) version of SPYRO 7. The correct functioning of these enhancements is contingent on the availability of the appropriate version of SPYRO 7 on the user's computer.
Note 3: The implementation for V11 and V12 was for the preliminary SRTO7 (7.7.6 and earlier). Support for SRTO 7.7.7 and later was added in V12.1 and higher.
For more information on SPYRO, please see the following external link: https://www.spyrosuite.com/
Keywords: EOSpyro, SPYRO, Pyrolysis kinetics, KS7, VSTS 554970, VSTS 707856, VSTS 1336452
References: None |
Problem Statement: When you must re-install any Aspen InfoPlus.21 software, we recommend that you first do a clean uninstall of all Aspen software, before proceeding to re-install. This is especially useful if you encountered some problems that require you to re-install the software. | Solution: First, uninstall ALL AspenTech software from Start-> Aspen Configuration-> Uninstall AspenTech Software. Choose Select All, and click Uninstall to proceed. Be sure to uninstall the AspenTech common components when prompted to do so. Once you are done, reboot the machine.
After rebooting, delete any remaining Aspen folders on the machine.
For Windows Server 2016 and Aspen Software 64-bit install:
* delete C:\program files\aspentech (may be present in more than 1 drive)
* delete C:\program files\common files\aspentech shared (stop the NobleNet portmapper service first)
* delete C:\ProgramData\AspenTech
Note: If you have any important files in these locations, be sure to save a backup elsewhere first.
Next, go to the Registry Editor (Run-> regedit) and delete the following AspenTech entries in the Registry:
* HKEY_LOCAL_MACHINE\Software\AspenTech
* HKEY_CURRENT_USER\Software\AspenTech
For Windows Server 2016 and Aspen Software 32-bit install:
* delete C:\program files (x86)\aspentech (may be present in more than 1 drive)
* delete C:\program files (x86)\common files\aspentech shared (stop the NobleNet portmapper service first)
* delete C:\ProgramData\AspenTech
Next, go to the Registry Editor (Run-> regedit) and delete the following AspenTech entry in the Registry:
* HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AspenTech
After that, reboot the machine, and you are done. You may now proceed to re-install the Aspen InfoPlus.21 software.
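As a sanity check after rebooting, the cleanup targets above can be enumerated programmatically. The sketch below (an illustration, not an AspenTech tool) simply builds the folder and registry lists described in this procedure for a given install bitness; paths assume the default C: drive:

```python
def aspen_cleanup_targets(bitness: int) -> dict:
    """Return the folders and registry keys to remove for a 32- or 64-bit
    Aspen software install, per the uninstall procedure above."""
    if bitness == 64:
        pf = r"C:\Program Files"
        reg = [r"HKEY_LOCAL_MACHINE\Software\AspenTech",
               r"HKEY_CURRENT_USER\Software\AspenTech"]
    elif bitness == 32:
        pf = r"C:\Program Files (x86)"
        reg = [r"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AspenTech"]
    else:
        raise ValueError("bitness must be 32 or 64")
    folders = [
        pf + r"\AspenTech",                      # may be present on more than one drive
        pf + r"\Common Files\AspenTech Shared",  # stop the NobleNet Portmapper service first
        r"C:\ProgramData\AspenTech",
    ]
    return {"folders": folders, "registry": reg}
```

Note that the 32-bit install uses the Wow6432Node branch of the registry, while the 64-bit install has entries under both HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER.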
Keywords: IP.21
IP21
InfoPlus21
install
References: None |
Problem Statement: How do I model and optimize a Spray Dryer in Aspen Plus? | Solution: The attached Aspen Plus V8.8 demo shows how an industrial spray dryer can be modeled in Aspen Plus and how such a dryer can be optimized to reduce its energy demand while staying within the product specifications. An associated PDF guides you through the steps.
This example will cover:
· A basic description of the spray dryer model
· Modeling the use of a gas cyclone and filter to improve product recovery and increase throughput
· Demonstrating the Aspen Plus sensitivity and optimization capabilities to determine optimal operating conditions that reduce the dryer's total energy demand while staying within product specifications
· How reducing the drying agent flow rate and heater outlet led to a 12.5% reduction in the energy demand
Keywords: Solids Capabilities, Unit Operations, Drying, Formulation, Energy optimization
References: None |
Problem Statement: What is the procedure to delete a record from an Aspen InfoPlus.21 database? | Solution: This article describes the detailed procedure to delete a record from an Aspen InfoPlus.21 database.
Due diligence should be taken before deleting any record (especially device records or tag records): once a record is deleted from the database, the deletion is permanent and cannot be reverted.
After a record is deleted, recreating a record with the same name will not necessarily reuse the same record ID. Because history is stored by record ID, the history data associated with the original tag record can no longer be read.
Important note: Even when a tag record is deleted using the procedure below, its history data remains within the database. However, there is no straightforward method to read the history data of a tag once it has been deleted.
Below is the procedure to delete a record from database:
a. Stop the archiving process for the tag. Remove the historian name from IP_REPOSITORY and turn OFF IP_ARCHIVING.
b. Remove the references to the record. Right-click the tag name and select Remove References.
c. Once the references are removed successfully, make the tag unusable. Right-click the tag name and select Make Unusable.
d. Delete the tag. Right-click the tag and select Delete.
The above procedure will delete the tag from the database.
Keywords: Delete tag from database
InfoPlus21 database
Tag deletion
References: None |
Problem Statement: How to configure an Aspen OnLine project with Aspen Plus EO model | Solution: In this tutorial, the steps to deploy an AspenTech engineering model for supporting plant operation online with Aspen OnLine are shown.
Β· How to prepare an Aspen Plus EO model for online deployment
Β· How to create and configure Aspen OnLine project to connect model with real time plant data
Β· How to troubleshoot failed online model execution
A fully configured Aspen OnLine project based on a C2 splitter performance monitoring application developed in Aspen Plus is used as an example to illustrate detailed steps.
Keywords: Aspen OnLine, Configure, Tutorial, Aspen Plus model, Online deployment, Online Operation Support
References: None |
Problem Statement: How to simulate freezing in an Aspen Plus simulation? | Solution: There are two ways to simulate freezing in a simulation:
1. Use the Reaction Chemistry to handle the formation of the solid as a salt precipitation reaction with appropriate K-Salt.
2. Use the RGibbs unit operation block to calculate the solid phase equilibrium.
Chemistry Method:
Aspen Plus can handle the formation of a solid as a salt precipitation reaction with an appropriate K-Salt.
The Electrolyte Wizard can be used to include ice formation as a reaction by checking the Include ice formation box under Options. Then, the component ICE(S) of component type Solid will be added to the component list, and a Salt reaction with K-Salt parameters will be added to the Chemistry:
Reaction
Type
Stoichiometry
ICE(S)
Salt
ICE(S) <--> H2O
ln(Keq) = -62.705 + 1681.4/T + 10.317 ln(T) - 0.004851 T
T in Kelvin
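As a quick check of this expression, the equilibrium constant can be evaluated in a few lines; at 273.15 K (the freezing point of pure water), ln(Keq) comes out essentially zero, i.e. Keq close to 1, as expected for pure-water ice formation:

```python
import math

def ln_keq_ice(T: float) -> float:
    """ln(Keq) for the ICE(S) <--> H2O salt reaction, T in Kelvin
    (coefficients from the Chemistry generated by the Electrolyte Wizard)."""
    return -62.705 + 1681.4 / T + 10.317 * math.log(T) - 0.004851 * T

print(ln_keq_ice(273.15))  # essentially zero: Keq close to 1 at the pure-water ice point
```

Below 273.15 K the expression goes negative (ice favored), consistent with freezing-point depression calculations for the methanol-water example discussed next.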
An example of using Aspen Plus to predict the amount of ice formed at different temperatures for an 18 wt. % methanol-water solution is attached. Note that there are no ions in this system.
The freezing point of an 18% methanol-water solution is -11.13 C (Handbook of Chemistry and Physics, 52nd ed., p. D-198); the freezing point predicted by Aspen Plus is about -11.8 C.
Ions do not have to be part of the system, and any solid can be used instead of the water/ice system with the appropriate equation for the equilibrium constant. This method can also be used to predict solubilities (see Solution 102340 for an example). Note that K-SALT and activity coefficient model parameters are not independent of each other.
The true component approach should be used in order to see the solid in the stream results.
The advantage of this method is that the solid equilibrium is evaluated for every stream and in every unit operation block.
RGibbs Method:
RGibbs can be used to calculate solid phase equilibrium (e.g., ice forming from water). To do this, give the solid a different component ID - this tricks the RGibbs block into thinking it is doing chemical equilibrium instead of phase equilibrium (see Solution 102346 for an example).
RGibbs can even handle solid solutions (alloys). RGibbs can be used for much more complex applications than the Chemistry method discussed above.
NOTE: Phase equilibrium calculations between solids and liquids rely on accurate calculation of the Gibbs Free Energy for both phases. A small difference in Gibbs Free Energy calculations can cause Aspen Plus to predict a melting temperature far from reality. Phase equilibrium should be verified for the components and Physical Property models of interest before relying on the predictive capability of RGibbs.
RGibbs bases its prediction on pure component properties and should not be used to predict solubilities.
More information about using solids in an RGibbs reactor can be found in the Aspen Plus help under Using the Simulation Environment | Unit Operation Models.
Keywords: Solid formation, solid phase, solid component
References: Manual | Reactors | RGibbs | Specifying RGibbs | Solids reference topic. |
Problem Statement: Will the latest versions of OLGA work with Aspen HYSYS V9 and V10? | Solution: A new OLGA link allows engineers to use the latest versions of OLGA with Aspen HYSYS. This link has been validated on OLGA 2014.2.0, OLGA 2015.3.1, and OLGA 2016.2.1, but should work with all versions of OLGA that use the OPC interface (versus the old TCP/IP interface).
There have been updates to the OLGA extension since its first availability (OLGA extension V7.0). In the attachments you will be able to find the following updates for the OLGA links:
V7.2
Link validated with OLGA 2016.2.1
Quotes used in an OLGA file now get read correctly
Addresses unit conversion for SCF/d/psi
Addresses mis-converted signs for OLGA sources used as HYSYS outlet
V7.3.3
The TIME variable is required instead of optional. When upgrading from OLGA extension V7.2, the TIME variable will now need to be exposed.
Allows integration of times larger than 500,000 seconds
Addresses an issue when a user attempts to change a HYSYS stream connection instead of deleting and re-adding it
Allows for selecting an OLGA boundary as an inlet or an OLGA source as an outlet
Allows drilling fluid to support WATERMUD type
Fixes an issue with detecting changes in drilling fluid on reloading an input file
All files included in the attached ZIP folders should simply be replaced on the machine. It is recommended to make a backup of the original files in the C:\Program Files (x86)\AspenTech\Aspen HYSYS V10.0\Extensions\HYSYS OLGA Link folder before following the next steps:
Replace the *.dll and *.edf files in this folder with the ones from the attachments.
Ensure that the files haven't been blocked by Windows by right-clicking each file and selecting Properties. On the General tab, if there is an Unblock button in the bottom area, press it and then press Apply. If there is no Unblock button, then there is no issue.
Test with OLGA again.
If there are issues with adding the extension to a flowsheet after replacing the files, it can be re-registered by following the steps below:
Launch HYSYS as an administrator by right-clicking aspenhysys.exe and selecting run as administrator.
On the Customize tab, click Register an Extension.
Navigate to the HYSYS OLGA Link folder listed above.
Select OLGAPipeline.dll or OLGAPipeline.edf.
There should be a dialog with information including a successful registration message.
Test with OLGA again.
Keywords: OLGA, link, HYSYS, Integration, Server, Extension
References: None |
Problem Statement: What is the meaning of these error messages:
Block: FSPLIT Model: FSPLIT
*** SEVERE ERROR
A NON-CONVENTIONAL PHASE IS PRESENT IN (0TH) KEY.
STD VOL FLOW FOR A NON-CONVENTIONAL PHASE IS NOT DEFINED.
Block: FSPLIT Model: FSPLIT
*** SEVERE ERROR
A MOLE-FLOW SPECIFICATION IS GIVEN BUT A NON-CONVENTIONAL SUBSTREAM
IS PRESENT AND NO KEY IS SPECIFIED.
SPECIFY MASS FLOW OR DEFINE A KEY | Solution: Non-conventional components have neither a defined molecular weight nor a standard volume, so mole flow and standard-volume flow specifications are not allowed (even if the non-conventional component flow is zero).
There is no problem if you use mass flow.
The solution is to specify key components. See the attached example.
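The rule behind these messages can be summarized in a small helper: with a non-conventional substream present, a mole or standard-volume flow specification is only valid if a key is defined (on a conventional substream). This is an illustrative restatement of the error logic, not Aspen Plus code:

```python
def check_fsplit_spec(spec_basis: str, has_nc_substream: bool, key_defined: bool) -> str:
    """Return 'ok' or the reason an FSPLIT flow specification would fail,
    following the rules described above for non-conventional substreams."""
    if spec_basis == "mass":
        return "ok"  # mass flow is always safe
    if spec_basis in ("mole", "stdvol") and has_nc_substream and not key_defined:
        return "specify mass flow or define a key"
    return "ok"
```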
Keywords: FSPLIT, non-conventional
References: None |
Problem Statement: How to capture SQL Server Trace file? | Solution: The attached document details the process of creating a SQL Server Trace file.
Keywords: None
References: None |
Problem Statement: Editing of component rundown seems to affect previous cases.
1) Open MBO and use case BASE, and check component rundown for HCD
2) Add case TESTRD
3) Go to comp rundown and delete all rundown in HCD
4) Now move to case BASE
5) Stream HCD shows 0 flows for BASE case as well | Solution: The issue will be fixed in V14.
Workaround: After switching to the BASE case, close the model and open it again; the component rundown for HCD will be displayed corresponding to the case.
Keywords: case
component rundown
References: None |
Problem Statement: After installing PIMS V11, user gets following error message when starting Petroleum Scheduler:
Could not get the value for PtExecDirectory key. Please verify that this registry key is defined and the directory path is correct.
What does it mean and how can it be eliminated? | Solution: This can be observed when you use different versions of PSC software, which is not a supported configuration; QE does not test this combination.
The issue is that Platinum was sunset in V10; when the user installs PIMS V11, Platinum V8.8 gets uninstalled and APS produces the error. Please upgrade all PSC products to the same version.
Keywords: Platinum
registry
compatibility
References: None |
Problem Statement: Questions on Parametric Analysis (PARAOBJ) | Solution: Q: Is it possible to tamper with the opening inventories in PARAOBJ?
A: Opening inventories are RHS values in the first material balance and they are constant values, divided by the period length. You could add a constant term to the VBALxxx1 equation and then vary its value via PARAOBJ as below:
Q: Is it possible to tamper with transfers in PARAOBJ?
A: Yes, transfers are modeled via T-variables and their values can be modified via PARAOBJ. Use Txxxyzmp, where xxx is the stream tag, y is the source tag, z is the destination tag, m is the mode identifier, p is the period identifier:
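The variable-name layout can be assembled programmatically; the sketch below just concatenates the tags per the Txxxyzmp pattern described above, and the example tags are hypothetical, not from a real model:

```python
def transfer_variable(stream: str, source: str, dest: str, mode: str, period: str) -> str:
    """Build a PIMS transfer variable name Txxxyzmp:
    xxx = stream tag, y = source tag, z = destination tag,
    m = mode identifier, p = period identifier."""
    assert len(stream) == 3 and all(len(t) == 1 for t in (source, dest, mode, period))
    return "T" + stream + source + dest + mode + period

print(transfer_variable("HCD", "A", "B", "M", "1"))  # -> THCDABM1
```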
Q: Is it possible to report the incremental values of certain streams with DISPLAY? Or is it possible to find the incremental values in some other report while using PARAOBJ and not generating database for each case?
A: VBAL row can be added to table DISPLAY and marginal values (Pi, incremental value) will be provided in the created ParametricMarginalsxxx.xls file.
Q: Is it possible to report closing inventory change with Parametric Analysis?
A: Closing inventory change should be the difference between the INVT variable at the current period and previous period. You could print out these variables in table DISPLAY and then compare their values for each run (INVTxxxpy).
Keywords: parametric analysis
DISPLAY
PARAOBJ
References: None |
Problem Statement: Parametric Analysis doesn't give any result for XPROD pair in V11 and V12. It works fine in V10. What is the issue and how to fix? | Solution: Issue was introduced via the fix for VSTS 287605, which was fixed in the V11 gold and not patched back. The problem is caused by the processing of the row order. If you put the XPROD entries first in table PARAOBJ, then it works fine.
It is fixed in V12.1.
Keywords: XPROD
parametric analysis
References: None |
Problem Statement: The optimizer can't complete the process, one of the following messages appears, and an "Incomplete Solution" warning appears. How can we figure out the root cause of the problem? | Solution: The table below shows the mapping between MBO variables and their meanings. This will help you understand where the problem is coming from (R = row, C = column).
Keywords: incomplete
References: None |
Problem Statement: Is the delay time (hrs.) field value hardcoded, or can we change it? Every time I rewrite it, it goes back to 3 hours.
The same question applies to the time difference between docking and hoses-on actions; there seems to be a fixed 2-hour interval. Can we change it? | Solution: Dock Scheduling Settings on the top toolbar helps configure default values for screens, model settings, and other global settings specific to dock screens. The default docking time and connect hours can be configured under the General tab.
Keywords: dock scheduling
delay
settings
References: None |
Problem Statement: Could you please explain how the voyage event "rate" is calculated in Dock Scheduling? | Solution: APS will use the default rate of the Cargo material, if it has one (on the Material Attributes dialog), as the event rate. Otherwise it calculates the rate as quantity * 2 (the event duration is 12 hours).
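The fallback arithmetic can be sketched as follows; this is a restatement of the rule above, not actual APS code, and the function name is illustrative:

```python
def voyage_event_rate(quantity, default_rate=None):
    """Event rate for a Dock Scheduling voyage event: use the Cargo
    material's default rate when one is set, otherwise quantity * 2
    (the event duration is fixed at 12 hours)."""
    return default_rate if default_rate is not None else quantity * 2

print(voyage_event_rate(500.0))        # -> 1000.0 (no default rate set)
print(voyage_event_rate(500.0, 80.0))  # -> 80.0 (default rate takes precedence)
```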
Keywords: dock scheduling
cargo
voyage
rate
References: None |
Problem Statement: How to test unsolicited reads using Aspen Cim-IO_T_API | Solution: Create an interface and logical Device following standard practice.
Start the receiver process required for unsolicited test.
Launch a Windows command prompt with Administrator privileges and change the folder to "C:\Program Files (x86)\AspenTech\CIM-IO\code"
Execute cimio_t_receiver.exe with arguments as the logical device name. Example
cimio_t_receiver.exe CIMIO_1 I_OPC_1
Execute an unsolicited read using Cim-IO_T_API.
Launch the Cim-IO Test API tool without closing the above command prompt for receiver.
Select option "b-Test Cim-IO DECLARE"
Enter logical device name. Example: CIMIO_1
Enter the default values for Unit Number as 1.
Enter number of tags as 1
Enter priority as 1
Enter timeout as 10000
Select Access Type as Asynchronous, i.e., 2
Enter receiver node as the computer name
Enter receiver service as the logical device name. Example: I_OPC_1
Enter frequency in tenths of a second as 1
Enter list id as any number. Example: 1
Enter tag name
Select data type for tag
Select device data type for tag
Select deadband as 1
Enter absolute deadband as 0
The Cim-IO_T_API will display the values coming from the receiver.
Keywords: None
References: None |
Problem Statement: When configuring a Cim-IO for OPC connection to an Onspec OPC server, it will connect but cannot read or write | Solution: Onspec support recommends setting the Identity of the Onspec COM application in DCOM Config to "The Interactive User".
Note: This is a nonstandard configuration and is not recommended for any other OPC server.
Keywords: 0.0
The launching user
This user
Dcomcnfg
References: None |
Problem Statement: I have created a new logical device connection to an Aspen Cim-IO for OPC DA interface but the status in the Cim-IO IP21 Connection Manager remains red and its status remains Stopped.
The cimio_logical_devices.def file looks correct, but there are various messages in the CIMIO_MSG.LOG file:
CIMIO_DAL_CONNECT_CONNECT, Error connecting to service
CIMIO_MSG_CONN_SOCK_CREATE, Error creating an outbound socket
CIMIO_SOCK_OUT_CONN_FAIL, Error connecting to the server
WNT Error=10061 No connection could be made because the target machine actively refused it. | Solution: The cause of the problem is likely to be a missing key in the registry.
Open the Registry Editor and locate the following key:
On 32-bit machine:
[HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\CIM-IO to OPC Interface]
On 64-bit machine:
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AspenTech\CIM-IO to OPC Interface]
You will probably discover that the CIMIOOPC key is missing, which indicates an issue with the installation. Reinstalling the Cim-IO interfaces should address this issue.
Keywords: RegQueryValue Failed
AsyncDlgp stopped
AsyncDlgp crash
References: None |
Problem Statement: Now you can track water in Crude Tanks. Below is the correct workflow (there is a typo in the V12 Help documentation). | Solution: Create a CRUDE called _WTR
Create a crude tank called WATR
Create a Crude receipt that has _WTR as part of the crude composition.
Make sure that WATR exists in the control variable of the Gantt screen where you are creating a crude transfer event.
Create a crude transfer event that goes to WATR tank to remove water from the tank.
Please note the WATR tank won't show a volume change.
Keywords: water
crude tank
composition
References: None |
Problem Statement: We have two process streams (X and Y) which mix and can only do so up to a fixed maximum ratio. The ratio can vary depending on the absolute flow of Stream A, a different LP column that is not itself part of the ratio control row but is present elsewhere in the same LP submodel. When Stream A is at low rates this ratio increases, and when Stream A is at higher rates this ratio decreases; in summary, the maximum ratio depends on the flow rate of Stream A.
Is it possible to create such structure in DR? | Solution: For AO this problem can be modeled easily using Non-linear Equations tool.
In DR, we could try to resolve this issue using PIMS-SI (in case you're not planning to migrate to AO; non-linear formulas are still preferable, and please note SI is not supported in AO).
For example, a fixed ratio between two variables is introduced via row Enewnew in table ROWS (in your case it could be either a G or an L row), and we want to vary that coefficient for DCN (currently it is 5).
The following workflow could be used for control:
Define Excel Server
Define the necessary independent variable as an Input (I defined SHCDBAS, a random choice). The value will be populated from PIMS to the Simulator.
The necessary non-linear correlations can be defined in a separate spreadsheet. I used an IF operator for testing.
The result of the calculation is what I would like to use as the mentioned coefficient, so as an Output I specify that the value should be used in PIMS for the coefficient at the intersection of row Enewnew and column SNHTDCN.
After the run, I can verify that the coefficient changes depending on the SHCDBAS activity.
If you would like to see the updated Excel file, please activate "Save Excel Tables after PIMS-SI Calc..." in Tools | Program Options.
Keywords: PIMS-SI
ratio
variable
References: None |
Problem Statement: When connecting Aspen Verify to the PIMS result database, customers get the message "Invalid object name 'prcase'" during SQL Server connection. | Solution: There is a requirement for the PIMS database collation to be case-insensitive (this will be noted in the documentation starting from V14). Please note the PIMS database script does not specify the collation attribute but takes the default collation that has been set in SQL Server. The default collation value is determined from the operating system locale; see the Server-Level Collations section in the link https://docs.microsoft.com/en-us/sql/relational-databases/collations/collation-and-unicode-support?view=sql-server-ver15#Locale_Defn
In the scenario where the default collation value is case-sensitive, the user will need to specify a case-insensitive collation value in the PIMS database script. If the user manually creates the PIMS database, then the collation value likewise needs to be case-insensitive.
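If the database is created manually, the collation can be pinned explicitly at creation time. A sketch, where the database name is a placeholder and SQL_Latin1_General_CP1_CI_AS is just one example of a case-insensitive (the CI in the name) collation:

```sql
-- Create the PIMS results database with an explicitly case-insensitive collation
CREATE DATABASE PimsResults
COLLATE SQL_Latin1_General_CP1_CI_AS;
```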
Keywords: SQL Server Management Tool
results
database
collation
References: None |
Problem Statement: Problem reports from customers indicated that the previous Aspen Cim-IO Store-and-Forward functionality was not very reliable, sometimes failing to complete or forwarding very slowly (problems not reproducible in-house). Consequently, Aspen Technology replaced the overly complex Aspen Cim-IO store-and-forward architecture with a more reliable mechanism for the temporary buffering of data to be sent to the Aspen Cim-IO client after connectivity has been re-established. | Solution: Reason for Store-and-Forward
Under some circumstances, the Aspen Cim-IO server software is capable of collecting data from the device, but the data cannot be transferred to the Aspen Cim-IO client application, resulting in a loss of historical information. Aspen Cim-IO Store and Forward was designed to allow for recovery under the following situations:
The client that requested the data is shut down or otherwise unable to receive data, or
The connection between the server and client computers is lost.
When the server determines that one of these conditions applies, Aspen Cim-IO Store and Forward accumulates the time-stamped data as it is collected. Once it becomes possible to return to normal operations, Store and Forward sends the accumulated data to the client.
How previous Aspen Cim-IO S&F mechanism worked
The Store and Forward system consists of three processes on the Aspen Cim-IO server computer:
Aspen Cim-IO scanner process
Aspen Cim-IO store process
Aspen Cim-IO forward process
If Store and Forward is enabled, the Scanner functions as an intermediary between the client and the DLGP. If the client is unavailable or the link is broken, the Scanner keeps sending the client's data requests to the DLGP.
The Store process receives data from the DLGP. If the client is unavailable, the Store process writes the data to a store file until normal data transfer can be reestablished.
When normal data transfer is available after an interruption, the Forward module begins sending the contents of the store file to the client. This process continues until the file is emptied. The Store process continues writing data to the store file until the Forward process empties the store file. Afterwards, the Store process resumes communications with the client task.
Improved Aspen Cim-IO S&F mechanism
AspenTech re-implemented Store & Forward by exploiting the same kind of queuing mechanism that has been used for several decades by the Aspen InfoPlus.21 history event queue, a mechanism that has proven to be very reliable.
A key simplification is that the Aspen Cim-IO interface processes no longer have to coordinate switching between normal mode, store mode, and forwarding mode. Instead, if store and forward is enabled, all data passes through the queue before being sent to the Aspen Infoplus.21 system.
If the Aspen InfoPlus.21 is inaccessible, then new process data simply accumulates in the queue until one of the following conditions occurs.
Connectivity between the client and server is re-established. Data removed from the queue is then forwarded on to the client.
Data packets in the queue have been in the queue too long, violating a configurable maximum time limit. Aspen CIM-IO simply dequeues and discards data packets that are older than the specified limit.
The queue has grown too large, violating a configurable maximum size. The Aspen Cim-IO store and forward process dequeues and discards the oldest data packets as necessary.
Aspen Cim-IO S&F Configuration
Aspen Cim-IO store-and-forward parameters are specified for an interface using the Cim-IO Interface Manager.
Expand the tree control in the left hand pane until you find the Aspen Cim-IO interface instance of interest. Selecting the interface instance causes the middle pane to show information about the interface instance.
You will be able to:
Enable Aspen Cim-IO store-and-forward. Enabling store-and-forward causes the disk overflow queue to be created. It also causes the scanner process and the Aspen Cim-IO store and forward process to run.
Disable Aspen Cim-IO store-and-forward. Disabling store-and-forward stops the scanner process, stops the Cim-IO store and forward process, and deletes the store and forward disk overflow queue.
Set maximum time limit for data in queue. The Aspen Cim-IO store and forward process automatically discards messages that have been in the queue longer than the maximum time limit. The default is 72 hours.
Set the size of the store and forward disk overflow queue file storage. In order to avoid disk fragmentation, improve performance, and guarantee that the desired disk space is available when needed, the disk overflow queue files should be created automatically in advance of need. If you specify the total disk overflow queue size, then Cim-IO will automatically create five disk overflow queue files, with each file sized at 20 percent of the specified total.
For example, if specified queue storage is 1 GB then five 200MB files would be created. If all five files become full (because of extended outage), then the data in the oldest file would be discarded; that is, no more data would be stored than can be accommodated by the pre-allocated storage.
If the user fails to specify a queue storage size, then one file is created when needed (i.e., when the primary and secondary memory sections are full), similar to the Aspen InfoPlus.21 history event queue.
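The queue sizing and discard-oldest behavior described above can be sketched in Python (a simplified illustration of the rules, not the actual Cim-IO implementation; both function names are hypothetical):

```python
from collections import deque

def queue_file_sizes(total_bytes, n_files=5):
    """Each pre-allocated disk overflow queue file gets an equal share
    (20 percent by default) of the specified total queue storage."""
    return [total_bytes // n_files] * n_files

def enqueue_with_discard(queue, packet, max_packets):
    """When the pre-allocated storage is full, the oldest data packet
    is discarded to make room for the newest one."""
    if len(queue) >= max_packets:
        queue.popleft()  # discard oldest data packet
    queue.append(packet)

# e.g. 1 GB of specified queue storage yields five 200 MB files
sizes = queue_file_sizes(1_000_000_000)
```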
Keywords: SF
Forward
Store
Cimio
Interface
References: None |
Problem Statement: In Calibrate mode, RAMP variables deviate from their ramp setpoints, regardless of Ramp Rate values or the test group's mode (even when the test group is in Control mode). This document explains the reason and a workaround. | Solution: This behavior is by design: RAMP variables are not balanced in Calibrate mode (even if a test group is in Control mode). The concern is that frequent moves by the ramp-handling MVs would interfere with other variables still in Calibrate mode and degrade the data quality for modeling.
Starting with V10.0 CP2, there is a new Calibrate mode (CALIBOPT=1) in which the Calibrate Ratio determines the allowable deviation of each MV from its ideal target (the target when the Calibrate Ratio is 0). This affects how far each MV can step away from its optimal value. For each MV, the allowable deviation from the optimal target is the smaller of STCTLSTRATIO*(ULINDM-LLINDM) and STMVMAXSTEP.
Users can specify how far each variable can step away from its optimal target. For an MV, clamping its Operator Limits or decreasing STMVMAXSTEP moves it closer to its optimal target. For a CV, specifying its handling MVs has a similar effect.
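The allowable-deviation rule above can be expressed as a one-line sketch (illustrative only; the parameter names mirror the entries described in the text):

```python
def allowable_mv_deviation(stctlstratio, ulindm, llindm, stmvmaxstep):
    """Allowable deviation of an MV from its optimal Calibrate target:
    the smaller of the ratio-scaled operator range and the max step."""
    return min(stctlstratio * (ulindm - llindm), stmvmaxstep)
```

Clamping the operator limits (ULINDM/LLINDM) or decreasing STMVMAXSTEP shrinks this value, keeping the MV nearer to its optimal target.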
Keywords: Ramp, Calibrate, Setpoint
References: None |
Problem Statement: In an output file for an Aspen InfoPlus.21 external task what does this message mean?
Error : Invalid external task record - 481 | Solution: This means that the task in question does not have a corresponding record in the database. The missing record is needed in order for the task to function correctly. In the example screenshot above, the external task called TSK_IQ3 is producing the error. Notice that the task is NOT running (no check mark in the box to the left of the task name in the Defined Tasks list in the Aspen InfoPlus.21 Manager). The problem may be fixed either by creating a new ExternalTaskDef record with the same name as the task (in our example: TSK_IQ3) or by duplicating a similar record, like TSK_IQ1, and giving the duplicate the name of the task (in our example: TSK_IQ3).
Before:
After:
Now, with the new, corresponding ExternalTaskDef record TSK_IQ3 in place the TSK_IQ3 task can start and run successfully:
Keywords: None
References: None |
Problem Statement: What are the OEE calculation formulas being used in A1PE? | Solution: Availability = (Operating Time) / (Scheduled Time)
Performance = (Net Operating Time) / (Operating Time)
Quality = (Full Production) / (Net Operating Time)
OEE = Availability × Performance × Quality = (Full Production) / (Scheduled Time)
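These relationships can be checked with a short Python sketch (the time and production values are illustrative):

```python
def oee(scheduled_time, operating_time, net_operating_time, full_production):
    """Compute OEE as the product of the three component ratios."""
    availability = operating_time / scheduled_time
    performance = net_operating_time / operating_time
    quality = full_production / net_operating_time
    return availability * performance * quality

# The product telescopes to full_production / scheduled_time, so e.g.
# 45 units of full production over a 100-hour scheduled window gives 0.45.
result = oee(100.0, 80.0, 60.0, 45.0)
```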
Keywords: OEE calculation formulas
A1PE
References: None |
Problem Statement: What are the formulas used in calculating the Upper Control Limit (UCL) and Lower Control Limit (LCL) in Ad-hoc SPC? | Solution: KNOWN MEAN
Range
Standard Deviation
Average
Upper Control Limit
UCLR = D2 × σ0
UCLs = B6 × σ0
UCLx = μ0 + A × σ0
Lower Control Limit
LCLR = D1 × σ0
LCLs = B5 × σ0
LCLx = μ0 - A × σ0
σ0 = standard deviation, μ0 = mean
If subgroup = 1, then A = 3, D2 = 3.685, D1 = 0. If subgroup >1, the values in the arrays below are used. To find the value being used for the variable find the value at the nth position (n = subgroup size).
A = [0.00, 2.121, 1.732, 1.500, 1.342, 1.225, 1.134, 1.061, 1.000, 0.949, 0.905, 0.866, 0.832, 0.802, 0.775, 0.750, 0.728, 0.707, 0.688, 0.671, 0.655, 0.640, 0.626, 0.612, 0.600]
D1 = [0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.204, 0.388, 0.547, 0.687, 0.811, 0.922, 1.025, 1.118, 1.203, 1.282, 1.356, 1.424, 1.487, 1.549, 1.605, 1.659, 1.710, 1.759, 1.806]
D2 = [0.000, 3.686, 4.358, 4.698, 4.918, 5.078, 5.204, 5.306, 5.393, 5.469, 5.535, 5.594, 5.647, 5.696, 5.741, 5.782, 5.820, 5.856, 5.891, 5.921, 5.951, 5.979, 6.006, 6.031, 6.056]
B5 = [0.000, 0.000, 0.000, 0.000, 0.000, 0.029, 0.113, 0.179, 0.232, 0.276, 0.313, 0.346, 0.374, 0.399, 0.421, 0.440, 0.458, 0.475, 0.490, 0.504, 0.516, 0.528, 0.539, 0.549, 0.559]
B6 = [0.000, 2.606, 2.276, 2.088, 1.964, 1.874, 1.806, 1.751, 1.707, 1.669, 1.637, 1.610, 1.585, 1.563, 1.544, 1.526, 1.511, 1.496, 1.483, 1.470, 1.459, 1.448, 1.438, 1.429, 1.420]
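As a sketch of the known-mean case for the average (x̄) chart, using the A constants from the table above (illustrative helper, not AspenTech code):

```python
# A constants copied from the table above; the value at the nth position
# (1-indexed) applies for subgroup size n > 1, and A = 3 for n = 1.
A = [0.00, 2.121, 1.732, 1.500, 1.342, 1.225, 1.134, 1.061, 1.000, 0.949,
     0.905, 0.866, 0.832, 0.802, 0.775, 0.750, 0.728, 0.707, 0.688, 0.671,
     0.655, 0.640, 0.626, 0.612, 0.600]

def xbar_limits_known_mean(mu0, sigma0, n):
    """UCLx and LCLx for the average chart when the mean is known."""
    a = 3.0 if n == 1 else A[n - 1]
    return mu0 + a * sigma0, mu0 - a * sigma0
```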
UNKNOWN MEAN
Range
Standard Deviation
Average
Upper Control Limit
UCLR = D4 × R
UCLs = B4 × s
UCLx = X̄ + A2 × R
Lower Control Limit
LCLR = D3 × R
LCLs = B3 × s
LCLx = X̄ - A2 × R
X̄ = mean, R = mean range, s = mean standard deviation
If subgroup = 1, then A2 = 2.6595, D4 = 3.267, D3 = 0. If subgroup >1, the values in the arrays below are used. To find the value being used for the variable find the value at the nth position (n = subgroup size).
A2 = [0.000, 1.880, 1.023, 0.729, 0.577, 0.483, 0.419, 0.373, 0.337, 0.308, 0.285, 0.266, 0.249, 0.235, 0.223, 0.212, 0.203, 0.194, 0.187, 0.180, 0.173, 0.167, 0.162, 0.157, 0.153]
D3 = [0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.076, 0.136, 0.184, 0.223, 0.256, 0.283, 0.307, 0.328, 0.347, 0.363, 0.378, 0.391, 0.403, 0.415, 0.425, 0.434, 0.443, 0.451, 0.459]
D4 = [0.000, 3.267, 2.574, 2.282, 2.114, 2.004, 1.924, 1.864, 1.816, 1.777, 1.744, 1.717, 1.693, 1.672, 1.653, 1.637, 1.622, 1.608, 1.597, 1.585, 1.575, 1.566, 1.557, 1.548, 1.541]
B3 = [0.000, 0.000, 0.000, 0.000, 0.000, 0.030, 0.118, 0.185, 0.239, 0.284, 0.321, 0.354, 0.382, 0.406, 0.428, 0.448, 0.466, 0.482, 0.497, 0.510, 0.523, 0.534, 0.545, 0.555, 0.565]
B4 = [0.000, 3.267, 2.568, 2.266, 2.089, 1.970, 1.882, 1.815, 1.761, 1.716, 1.679, 1.646, 1.618, 1.594, 1.572, 1.552, 1.534, 1.518, 1.503, 1.490, 1.477, 1.466, 1.455, 1.445, 1.435]
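Similarly, the unknown-mean (x̄-R) limits can be sketched with the A2, D3, and D4 constants from the tables above (illustrative helper):

```python
A2 = [0.000, 1.880, 1.023, 0.729, 0.577, 0.483, 0.419, 0.373, 0.337, 0.308,
      0.285, 0.266, 0.249, 0.235, 0.223, 0.212, 0.203, 0.194, 0.187, 0.180,
      0.173, 0.167, 0.162, 0.157, 0.153]
D3 = [0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.076, 0.136, 0.184, 0.223,
      0.256, 0.283, 0.307, 0.328, 0.347, 0.363, 0.378, 0.391, 0.403, 0.415,
      0.425, 0.434, 0.443, 0.451, 0.459]
D4 = [0.000, 3.267, 2.574, 2.282, 2.114, 2.004, 1.924, 1.864, 1.816, 1.777,
      1.744, 1.717, 1.693, 1.672, 1.653, 1.637, 1.622, 1.608, 1.597, 1.585,
      1.575, 1.566, 1.557, 1.548, 1.541]

def xbar_r_limits(xbar, rbar, n):
    """Average- and range-chart limits when the mean is unknown."""
    a2 = 2.6595 if n == 1 else A2[n - 1]
    d3 = 0.0 if n == 1 else D3[n - 1]
    d4 = 3.267 if n == 1 else D4[n - 1]
    return {"UCLx": xbar + a2 * rbar, "LCLx": xbar - a2 * rbar,
            "UCLR": d4 * rbar, "LCLR": d3 * rbar}
```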
KNOWN MEAN and UNKNOWN MEAN refer to the Q_MEAN_ESTIMATE. KNOWN MEAN is when the Q_LIMIT_UPD_TRIGGER value is set to "Recalculate Periodically" or "Calculate Once Using CIMQ_MEAN". Otherwise, it is assumed to be UNKNOWN MEAN.
Keywords: Upper Control Limit (UCL)
Lower Control Limit (LCL)
Ad-hoc SPC
References: None |
Problem Statement: How to create an ad-hoc Statistical Process Control plot on aspenONE Process Explorer (A1PE)? | Solution: Here is the procedure.
1. Insert an IP.21 tag to a basic trend plot on A1PE.
2. Check the checkbox of the tag on the tag legend area, then click the Edit button.
3. On the Trend Type, select Ad Hoc Spc instead of Actuals.
4. Define the Chart Type and Subgroup Size.
5. Define the method to calculate the Control Limits.
6. Click "Save", and the basic trend plot will display the real-time Statistical Process Control information.
Keywords: ad-hoc Statistical Process Control
Control Limits
References: None |
Problem Statement: After applying the CP1 patch for V11 on the Online server, the controller fails to deploy. When deploying, an error message appears saying to check various items, but all of them look correct. | Solution: The problem occurs when there are V11 EPs (emergency patches) applied on the Online server and CP1 is applied on top. It does not happen if CP1 was applied on a V11 installation with no EPs.
The installer fails to update the ACESAppTemplate.xml file to the CP1 version, leaving the EP version. Therefore, the engine recognizes the version of the ACESAppTemplate does not match with CP1, and outputs an error message when trying to deploy the controller.
The workaround is to replace the attached ACESAppTemplate.xml. Here is the procedure:
Close all DMC3 Builder windows in the Online server (DMC).
Stop all the RTE controllers.
Stop the RTE service
Replace the attached ACESAppTemplate.xml file in the following locations.
C:\Program Files (x86)\AspenTech\APC\V11\Builder
C:\Program Files (x86)\AspenTech\RTE\V11\APC.
If you get an "access denied" error, make sure to log in with an administrator account, copy the xml file onto the server, and then copy it to the locations mentioned.
Restart the RTE service. The application should deploy successfully.
Fixed in
Emergency patch 1 for CP1
Keywords: Aspen DMC3 Builder, CP1, Online Server, VSTS 563037
References: None |
Problem Statement: How to convert an Aspen Plus input file to a backup file? | Solution: This article will guide you through converting an Aspen Plus input file to a backup file, or through opening an Aspen Plus input file:
Navigate to Start -> Aspen Plus folder and select the Customize Aspen Plus V10 icon (the version number may differ on your machine)
Once the command prompt window opens, browse to the input file folder by using the DOS commands "cd" and "cd.."
Example: cd Foldername to enter the folder and cd.. to go back to the previous folder
After navigating to the input file folder, type aspen <space> nameoffile.inp <space> /mmbackup
The command prompt window will run and create the required summary file, history file, and backup file in the parent folder location.
Keywords: Input file, Backup file, Summary file, Customize Aspen Plus, DOS, command prompt
References: None |
Problem Statement: How to create a new stream in Explorer? | Solution: 1. In the Explorer application, create a new folder named as βStreamsβ.
2. In this new folder, create Primary Stream objects in the workspace, by using the Object | Create command.
3. Repeat step 2 above to create additional streams.
Keywords: None
References: None |
Problem Statement: No data is transferred from HYSYS to Excel via the Workbook option | Solution: If the user opens the MS Excel application standalone and encounters a "product activation failed" error like the one below, the user needs to enter a product key for MS Excel so that it can allow the user to export data from the Workbook to Excel.
Keywords: None
References: None |
Problem Statement: Why can't some assays be deleted from Petroleum Assays? | Solution: Some assays can't be deleted from Petroleum Assays for the following reasons:
The assay is attached to a stream.
A stream was defined as an assay. To delete the Assay, it will be necessary to delete the original stream.
To identify which streams have assays attached, follow the steps below:
1. Click the Editor button on the Conditional Formatting section of the Flowsheet/Modify ribbon. The Conditional Formatting window will be opened.
2. Then, select the drop-down menu and click <Add New> for a new conditional formatting option and select Petroleum Assay Type.
3. Customize the color scheme to easily identify the streams that have attached assays.
To detach an assay from a stream go to the Petroleum Assay form in the Worksheet tab and click the Detach button.
Keywords: Assay, Delete, Petroleum Assay Type.
References: None |
Problem Statement: What are the A1PE server speed benchmarks? | Solution: Latency should ideally be <250-300 ms
Bandwidth will depend on the number of users and the amount of data utilization (number of tags, comments, data points requested). For example, using a rough baseline of 10 tags with many data points over 2 hours (like tag ATCAI), the bandwidth is ~500 kb/sec per user to ~1 mb/sec per user (if there are lots of comments/alarms/text within a given window).
Keywords: speed benchmarks
A1PE
References: None |
Problem Statement: How to generate a report that includes composition and physical properties using Print Manager? | Solution: By default, printing of compositions and physical properties is disabled.
1. To enable these two variables, go to File and select Properties.
2. Check the option 'Save Phase Properties' on the General tab. Click OK.
3. Click Run and solve the case
4. Select the File menu and click Print.
5. From the print menu, you can now select the Composition and Physical properties.
Keywords: None
References: None |
Problem Statement: When designing basic phases in a BPL all the variables you define are internal to the phase (their values at execution cannot be accessed from another basic phase) except for Recipe Global variables. These variables once defined in any basic phase can be later used in another basic phase, as parameters in the recipe (RPL) design or inside PFC condition definitions.
The format of Recipe Global variables is as follows:
$$<name> where name is the variable name
If you include any arithmetic symbol in the name of the global variable (for example, $$IP-21), the variable cannot be used without causing syntax errors on compilation or execution. | Solution: The symbols +, -, / and * are not allowed in global variable names. All access (read/write) to global variables happens in script code, where these symbols are reserved tokens for arithmetic operations. If a variable name containing these symbols appears in the script code, the compiler throws a syntax error during compilation.
Consequently, the use of such symbols in the global variable names is not supported.
Keywords: APEM AeBRS
minus
plus
multiply
divide
shared variable
References: None |
Problem Statement: Recent versions of aspenONE Process Explorer (A1PE) feature several performance improvements that address the slow performance issue when communicating with remote Aspen InfoPlus.21 (IP.21) systems across a wide area network (WAN) instead of accessing an IP.21 system that is on the same local area network (LAN).
This article describes the background to this issue and how to configure the A1PE web server to take advantage of performance improvements in recent versions of A1PE. | Solution: The underlying cause of poor A1PE performance when communicating with a remote Aspen InfoPlus.21 (IP.21) system across a wide area network (WAN) instead of accessing an IP.21 system that is on the local area network (LAN) is explained below.
A1PE obtains process data from an IIS-based Process Data REST Service. The Process Data REST Service obtains the requested process data from the specified data source, which may be on a different computer. The Process Data REST Service establishes a connection with the remote IP.21 system when the user first opens a graphic or trend plot to view process data from that IP.21 system. This initial establishment of a connection is usually fast (seconds) if the IP.21 server is on the same LAN as the Process Data REST Service, but it may be quite slow (minutes) if the IP.21 server is somewhere more remote. Once a connection has been established, subsequent graphic or trend plot invocations are fast.
The Process Data REST Service makes calls to a local COM-based component (dll) - the IP.21 Data Server. When establishing an initial connection to an IP.21 system, the IP.21 Data Server component makes many calls to a lower-level IP.21 Application Programming Interface (API). These lower-level calls take much longer across a WAN because of the poorer speed and latency characteristics of a WAN.
The improvement provided here exploits an existing Aspen Process Data Service (a Windows service) that already resides on the InfoPlus.21 system for use by the Process Data Excel Add-ins. If configured to do so, the Process Data REST Service (a web service) will communicate with the Process Data Service (a Windows service) that resides on the same computer as IP.21. The Process Data REST Service makes relatively few, high-level calls to the Process Data Service. The Process Data Service invokes methods in a local IP.21 Data Server. When establishing an initial connection, the IP.21 Data Server still makes many calls to the lower-level IP.21 API, but in this scenario the calls execute much faster because they are not occurring across a WAN.
A1PE users who need to access remote Infoplus.21 systems can enable this performance improvement by making both an ADSA configuration change and an AtProcessDataREST.config change.
The ADSA change required for this WAN performance improvement is to add βAspen Process Data Serviceβ to the ADSA data source for the IP.21 system:
To obtain this WAN performance improvement, you also need to edit the AtProcessDataREST.config file, setting the UseProcessDataService parameter to True.
File: C:\inetpub\wwwroot\AspenTech\ProcessData\AtProcessDataREST.config
<REST>...<UseProcessDataService>True</UseProcessDataService>...</REST>
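If you prefer to script this change, the flag can be toggled with Python's standard ElementTree module. The snippet below operates on a simplified stand-in for the file content (the real file contains additional elements in place of the elided "..."); editing the file by hand works just as well.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for AtProcessDataREST.config content.
sample = "<REST><UseProcessDataService>False</UseProcessDataService></REST>"

root = ET.fromstring(sample)
root.find("UseProcessDataService").text = "True"  # enable the WAN improvement
updated = ET.tostring(root, encoding="unicode")
```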
So, in summary, the Process Data Service is used:
• Only if the entry in the configuration file is set to True
• Only for data sources configured with the Aspen Process Data (IP.21) service
• Only for data sources configured with the Aspen Process Data Service
• Only valid for REST systems that talk to InfoPlus.21 servers of the same or newer version
• Only if the IP.21 system is on another machine (the hostname in Process Data Service is NOT the same as the Web Server)
Users who do not make these changes would not benefit from the performance enhancement that applies when the InfoPlus.21 system is remote.
Keywords: Time-out
PD REST
admin
index
CQ00607343
References: None |
Problem Statement: The use of public cloud technology for traditionally "on premise" solutions has grown rapidly during the past decade. As a result, AspenTech is engaging more frequently with you, our customers, in deploying various technologies in cloud environments.
To support our customers in this area, AspenTech is testing and documenting best practices that will help you get started in Microsoft Azure and Amazon Web Services (AWS), and also provide broader guidance on how to approach this new deployment architecture with any on-premise or cloud-vendor solution.
Below is a simplified architecture diagram that shows a typical deployment. The two key components are the aspenONE Process Explorer and Aspen InfoPlus.21 products that are deployed in separate machines on Azure's and AWS's cloud infrastructure. Users would then connect to these products from an on-premise location, such as an operations facility or corporate office.
Solution
The attached document, entitled "AspenTech - Cloud Deployment Guide for MES", contains advice on both the Implementation Details and the Troubleshooting of issues that may arise in the course of implementing a cloud-based MES solution.
Please download and review this document to learn about the Best Practices to follow when deploying Aspen MES applications in the cloud environment.
Keywords: Amazon AWS
Microsoft Azure
Cloud
Manufacturing Execution Systems
References: None |
Problem Statement: How to bypass the Microsoft SQL database server installation pre-requisite for aspenONE Engineering installation? | Solution: The following registry entry needs to be created before installing aspenONE Engineering products, either for hosting a local DB or for bypassing the SQL installation pre-requisite.
For example, if a centralized SQL database server is already in place, or for hosting SQLLocalDB on a local machine, create the following registry entry before the installation:
Please change the number at the end of the Registry String for the version that is going to be installed.
\AspenTech\APED\xx.x
36.0 - v10
37.0 - v11
38.0 - v12
For 64-bit machines:
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AspenTech\APED\xx.x]
ASPENDB=dword:00000002
For 32-bit machines:
[HKEY_LOCAL_MACHINE\SOFTWARE\AspenTech\APED\xx.x]
ASPENDB=dword:00000002
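As a convenience, the entries above can be generated as a .reg file with a small script. This helper is a sketch (not an AspenTech tool) that follows the standard Windows .reg file format; the function name is hypothetical:

```python
def aspendb_reg_text(version_key, wow6432=True):
    """Build .reg file text that sets the ASPENDB DWORD to 2 for the
    given APED version key (e.g. "37.0" for V11). wow6432 selects the
    64-bit machine registry path."""
    node = r"SOFTWARE\Wow6432Node\AspenTech" if wow6432 else r"SOFTWARE\AspenTech"
    return ("Windows Registry Editor Version 5.00\n\n"
            f"[HKEY_LOCAL_MACHINE\\{node}\\APED\\{version_key}]\n"
            '"ASPENDB"=dword:00000002\n')
```

Save the returned text as a .reg file and double-click it to import, or merge it with reg.exe.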
Keywords: SQL, Bypass SQL, SQLLocalDB, Registry
References: None |
Problem Statement: The below error message appears when launching Aspen ProMV:
<class 'UnboundLocalError'>: local variable 'aw' referenced before assignmentAspen ProMV Version: 37.1 Python: 3.5.2 Platform: Windows-10-10.0.18362 3.5.2 |Continuum Analytics, Inc.| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)] | Solution: This issue may be caused when another AspenTech product on an older version exists on the same machine as your Aspen ProMV. Please follow the steps below to resolve it.
Check to see if there are any AspenTech Products on older versions than Aspen ProMV installed in the machine (for example, if you have Aspen ProMV V12 check to see if you have any AspenTech products on V11 or V10)
If you have AspenTech products on older versions, uninstall the product with the older version
Confirm again that all your AspenTech products are now on the same version as your ProMV
Open Notepad and type the command subst W: C:\ and save the text file using any filename of your choice
Edit the extension on the text filename from .txt to .bat (click yes when you get a warning about changing filename extensions)
Move the new .bat file to the C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup folder (or equivalent startup folder)
You may need to reboot your computer
Launch Aspen ProMV
If these steps are properly covered but the issue still persists, please contact AspenTech Support.
Keywords: Error, Open, Python, 3.5.2, UnboundLocalError
References: None |
Problem Statement: In the first revision of the Aspen Batch APC Installation Guide V12, it shows this URL for aspenONE Process Explorer:
http://<ServerName>/aspenbatchapconline/
or
http://localhost/aspenbatchapconline/
However, trying to access this page results in HTTP Error 404. | Solution: The correct URL is http://localhost/aspenbatchapc or http://<ServerName>/aspenbatchapc
The documentation will be updated with the correct URL in the Installation Guide found here: https://esupport.aspentech.com/S_Article?id=000097717
Keywords: batch, apc, url, http, error, 404, a1pe, aspenone, process, explorer
References: Defect 619670
Problem Statement: When trying to use the Inspector functionality on the Production Control Web Server, it is possible to get the next error when it is applied to large controllers:
Error:
Query timeout (Error: -2147217871)
Query:
AW_Q_APPINFO | Solution: Apply the following procedure:
In the Web server, go to C:\inetpub\wwwroot\aspentech\ACOView and make a backup copy of the AWview_buffer.asp file; rename the copy to something you can identify (for example, AWview_buffer_old.asp). Open AWview_buffer.asp in a text editor such as Notepad.
Using the Find functionality, locate the line res = Query(qstr1,ndx, false);
Replace the line with the following:
if (Server.ScriptTimeout < 240)
{
Server.ScriptTimeout = 240; //CQ00454217 - allow at least a 4 minute script timeout
}
res = Query(qstr1,ndx, false,-1,240);
Save the file and refresh the Web page.
This will set the timeout of the Inspector functionality to at least 4 minutes.
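The same edit can be applied programmatically. The sketch below (a hypothetical helper, not an AspenTech utility) performs the equivalent find-and-replace on the file contents:

```python
ORIGINAL_LINE = "res = Query(qstr1,ndx, false);"
PATCHED_BLOCK = (
    "if (Server.ScriptTimeout < 240)\n"
    "{\n"
    "    Server.ScriptTimeout = 240; //CQ00454217 - allow at least a 4 minute script timeout\n"
    "}\n"
    "res = Query(qstr1,ndx, false,-1,240);"
)

def patch_awview_buffer(source_text):
    """Replace the original Query call with the timeout-guarded version."""
    return source_text.replace(ORIGINAL_LINE, PATCHED_BLOCK)
```

Read AWview_buffer.asp into a string, pass it through patch_awview_buffer, and write the result back (after making the backup copy described above).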
Keywords: PCWS, Inspector, timeout
References: None |
Problem Statement: This article aims to explain a known defect in DMC3 Builder's prediction plots display of the data range dates and times when multiple datasets are selected, so that users are aware of the issue when viewing prediction results. The target fix for this issue is V14 (Defect 600435). | Solution: When running predictions from a case that has multiple datasets added with different dates, the original dates of the datasets are not being preserved in the predictions graph data range but instead, they are tied together as segments of a continuous dataset. So for example, two datasets were added to a case with the following ranges:
June-12-2014 to July-7-2014
July-14-2014 to August-20-2014
In DMC3 Builder > Case Editor > Variables > Dataset section:
After running identification and viewing the Prediction results, one would expect to see the range of both datasets from Jun 12 to Aug 20. However, the data range shows from June 12 to August 7, as seen below:
This may seem like the full datasets are not being used for the prediction results, however, this is just a display issue and both datasets are still being used. The reason for this behavior is because the dates from the second dataset are not being preserved accurately. Instead of ranging from June-12 to July-7 (first dataset) and then July-14 to August-20 (second dataset), the dates are combined together as if it were one continuous dataset. Since the Jun-12 dataset was added first in the list in the Case > Variables section, that is the date used as the starting point for the data range and the second dataset is added as a continuation of it.
As a workaround, the user can create a copy of the case for just viewing predictions and use one dataset at a time in the case to view the prediction results.
Keywords: prediction, plot, graph, data, range, date
References: Defect 600435, Target Fix is V14
Problem Statement: How to lock down PCWS manage and data entries for IQ, DMCplus and RTE. This means making certain web pages read-only for some applications, as shown below with RTE applications. | Solution: The following are the DataService config files on the Online server in which one can lock down data entries:
Aspen IQ: ONLINEAPPS/cfg/AspenTech.ACP.IQ.DataService.config
DMCplus: ONLINEAPPS/cfg/AspenTech.ACP.DMCplus.DataService.config
Nonlinear: ONLINEAPPS/cfg/AspenTech.ACP.Nonlinear.DataService.config
RTE: RTECONFIG/AspenTech.ACP.RTE.DataService.config
where
ONLINEAPPS = C:\ProgramData\AspenTech\APC\Online
RTECONFIG = C:\ProgramData\AspenTech\RTE\Vx.x\Config
For example, the AspenTech.ACP.DMCplus.DataService.config has the following structure
<aspentech.acp.dataService>
  <security>
    <allowedClientConnections>
      <add hostNameOrAddress="*" allowGet="True" allowManage="True" allowFileManager="True" allowWebModeling="False" />
    </allowedClientConnections>
  </security>
  <fileManager>
    <publicFolders diskFullNoticeFilePath="">
      <add folderName="APCOnlineApp" actualPath="%PROGRAMDATA%/AspenTech/APC/Online/app/" purgeIntervalInHours="0" minDiskFreeBytes="102400" />
    </publicFolders>
  </fileManager>
</aspentech.acp.dataService>
To lock down manage and data entries for a certain web server, add the following line, where ReadOnlyPCWSHostName is the name or IP address of the web server.
<aspentech.acp.dataService>
  <security>
    <allowedClientConnections>
      <add hostNameOrAddress="ReadOnlyPCWSHostName" allowGet="True" allowManage="False" allowFileManager="False" allowWebModeling="False" />
      <add hostNameOrAddress="*" allowGet="True" allowManage="True" allowFileManager="True" allowWebModeling="False" />
    </allowedClientConnections>
  </security>
  <fileManager>
    <publicFolders diskFullNoticeFilePath="">
      <add folderName="APCOnlineApp" actualPath="%PROGRAMDATA%/AspenTech/APC/Online/app/" purgeIntervalInHours="0" minDiskFreeBytes="102400" />
    </publicFolders>
  </fileManager>
</aspentech.acp.dataService>
In the case of RTE (AspenTech.ACP.RTE.DataService.config), the same line is added:
<aspentech.acp.dataService>
  <security>
    <allowedClientConnections>
      <add hostNameOrAddress="APC" allowGet="True" allowManage="False" allowFileManager="False" allowWebModeling="False" />
      <add hostNameOrAddress="*" allowGet="True" allowManage="True" allowFileManager="True" allowWebModeling="False" />
    </allowedClientConnections>
  </security>
  <fileManager>
    <publicFolders diskFullNoticeFilePath="">
      <add folderName="RTECLOUDS" actualPath="%RTECLOUDS%" purgeIntervalInHours="48" minDiskFreeBytes="102400" />
    </publicFolders>
  </fileManager>
</aspentech.acp.dataService>
One can add multiple Web server hosts as additional lines if required.
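If several config files need the same change, the read-only entry can be inserted with a short script using Python's standard ElementTree module. The XML string below is a trimmed stand-in for a real config file; note that the host-specific entry is placed before the wildcard entry, matching the examples above:

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for a DataService config file.
sample = """<aspentech.acp.dataService><security><allowedClientConnections>
<add hostNameOrAddress="*" allowGet="True" allowManage="True" allowFileManager="True" allowWebModeling="False" />
</allowedClientConnections></security></aspentech.acp.dataService>"""

root = ET.fromstring(sample)
conns = root.find("./security/allowedClientConnections")
readonly = ET.Element("add", {
    "hostNameOrAddress": "ReadOnlyPCWSHostName",
    "allowGet": "True", "allowManage": "False",
    "allowFileManager": "False", "allowWebModeling": "False",
})
conns.insert(0, readonly)  # place before the wildcard entry, as above
updated = ET.tostring(root, encoding="unicode")
```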
After the change has been applied, restart the corresponding data service in the Online server
Aspen APC DMCplus Data Service
Aspen APC Inferential Qualities Data Service
AspenTech Production Control RTE Service
In the case of DMCplus and IQ, running applications will not be affected by the restart. However, restarting the RTE service will affect the running applications and their data collection, so be careful.
Keywords: lock down, read only, AspenTech.ACP.DMCplus.DataService.config, PCWS
References: None |
Problem Statement: How to see the results for the D2887 assay in the Workbook when they are empty? | Solution: To see the results of D2887 assays, do as follows:
1. Double click on the empty value.
2. Go to Results.
3. Change basis to Mass Frac.
4. Repeat the previous steps for all the D2887 Assays columns.
For oil characterization, ASTM D2887 is a simulated distillation curve generated from chromatographic data; the resulting curve is reported on a weight percent basis. Thus, D2887 assays are always on a mass basis, and there is no means to get those results on a volume or mole basis.
For more information about the ASTM 2887 characterization, please refer to the KB article below.
Why in the Petroleum Assay Stream Analysis, the TBP for ASTM D2887 only shows results in mass percent basis?
Keywords: ASTM D2887, assay, empty
References: None |
Problem Statement: What to do if the Column Internals folder does not appear? | Solution: In some cases, *.apwz files do not have the Column Internals specified beforehand, which causes the Column Internals folder to not be visible in the left pane menu.
In order to see this folder follow the next steps:
1. Save the file in *.bkp format and reopen it.
2. Aspen Plus will show a Column Sizing/Rating Detected pop-up window; select Upgrade Column Hydraulics.
3. The Column Internals folder will be visible in the left pane menu.
Keywords: Column Internals, Column Hydraulics, activate Column Internals folder, not visible, does not appear, deactivated.
References: None |
Problem Statement: In versions prior to V12.1, the IQ Help File is not clear on the behavior of the Cusum Update Method. This article aims to elaborate on this topic.
Currently, the description that needs further explanation under the Aspen IQ Users Guide > Aspen IQ Modules > Lab Update (LBU) Module page > section on the CUSUM Update Method is as follows:
When the accumulated error is greater than the limit, then the number of lab values that were required to provide the necessary total error for an update is used as the number of lab values in the raw bias update calculation.
An update will occur when the number of samples specified in LBUNUMVALS is reached, regardless of what the CUSUM counter says. This approach allows you to be assured of a lab update after a certain number of lab samples, but if an update is required before that, then it will be performed.
Note: In version V8.8 and later, no automatic bias update occurs when the number of samples reaches LBUNUMVALS. Additionally, there is no longer a requirement that LBUNUMVALS samples must be entered before the first bias update occurs. | Solution: The above description in the Help File can be clarified as follows:
When the accumulated error is greater than the LBUCUSUMLIM threshold, a lab bias update will occur. The number of lab samples that will be used in this raw lab bias update calculation is determined as the minimum of either:
The number of lab samples that were required to provide the necessary total error that met the LBUCUSUMLIM
The number of lab samples equal to LBUNUMVALS
An update will occur as long as the LBUCUSUMLIM threshold is met, even if there are fewer than LBUNUMVALS good samples available.
If LBUCUSUMLIM is exceeded using number of samples less than LBUNUMVALS, then number of samples required to meet the threshold will be used for lab bias update calculation.
If LBUCUSUMLIM is exceeded using number of samples greater than LBUNUMVALS, then LBUNUMVALS will be the number of samples used in the raw bias update calculation.
Note: In version V8.8 and later, no automatic bias update occurs when the number of samples reaches LBUNUMVALS because the bias update requires that the accumulated error is greater than or equal to LBUCUSUMLIM first. Additionally, there is no longer a requirement that LBUNUMVALS samples must be entered before the first bias update occurs.
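The sample-count logic above can be illustrated with a small sketch (an illustration of the rule, not the actual Aspen IQ code; accumulating the absolute prediction errors is an assumption for this example):

```python
def samples_for_bias_update(errors, lbucusumlim, lbunumvals):
    """Return the number of lab samples used in the raw bias update, or
    None if the accumulated error never reaches LBUCUSUMLIM."""
    accumulated = 0.0
    for count, err in enumerate(errors, start=1):
        accumulated += abs(err)  # assumption: CUSUM of absolute errors
        if accumulated >= lbucusumlim:
            # Use the smaller of: samples needed to hit the threshold,
            # or LBUNUMVALS.
            return min(count, lbunumvals)
    return None  # threshold never met: no bias update
```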
Keywords: cusum, iq, help, threshold, limit, LBUCUSUMLIM, LBUNUMVALS
References: Defect 581445, Target fix V12.1
Problem Statement: How to remove or delete existing model variables from the Variable-Tag Mapping window? | Solution: If a user wants to delete or remove an existing model variable from the Variable-Tag Mapping window,
it can be removed only from the "Models | 'Project linked model' | Specifications | Mapping Variables" window.
Keywords: Delete mapped variable, Variable-Tag Mapping, Mapping Variables, Aspen ONLINE, AOL
References: None |
Problem Statement: How can I find the Engineering Drawings included in my Aspen Capital Cost Estimator project? | Solution: Aspen Capital Cost Estimator (ACCE) can estimate the quantity of Engineering Drawings, which contributes to the Engineering cost of the project.
The available drawing types correspond to the Basic Engineering and Detail Engineering phases of the project. ACCE reports these results in separate tables, each one corresponding to each of these phases.
The calculated Engineering Drawings can only be seen in the CCP report; a drawing count table is generated for each contractor that will perform Basic or Detail Engineering. Search for the label "DRAWING CATEGORY" to find them easily.
BASIC ENGINEERING DRAWINGS:
DETAIL ENGINEERING DRAWINGS:
Note that you will see one table for Basic Engineering drawings and a second table for Detail Engineering drawings. These tables are not immediately next to each other.
Keywords: Drawings, Sketches, Diagram, Indirects, Count
References: None |
Problem Statement: How do I create and modify an Engineering Workforce? | Solution: Aspen Capital Cost Estimator (ACCE) allows you to customize the Engineering Workforces (EWF) that will participate in the project: you can create additional teams and edit the cost and hours of each engineering discipline.
Engineering workforces can be customized from the Engineering | By Phase and Engineering | By Discipline forms; both can be used simultaneously to provide all the details for each engineering team.
Using the By Phase form:
Allows for general adjustments to engineering teams. Control data for All Engineering Phases or for a single Phase (i.e. Basic Engineering)
Specify an Engineering Workforce number. A blank value is allowed (Default EWF). Multiple columns can have the same number, each column representing a different adjustment to the same EWF.
Specify an Engineering Phase (i.e. Basic Engineering). If multiple phases will be modified, each should be selected on a different column.
Enter your own engineering cost and hours to overwrite the system calculations OR apply percent adjustments to the calculated hours and wage rates (default wage rates are shown in the Icarus Reference guide, Chapter 31).
Keywords: Engineering, Crews, Cost, Proratables, Indirect, Wage Rates
References: None
If desired, enter additional changes such as payroll burden, indirects, and expenses for the selected phase (lump sum or percentages available)
Add more columns to the form to define more Phases
The order of the Columns matters:
Do not use the "* - All Phases" option after defining a single phase (i.e. Basic Engineering) in the same EWF, since the All Phases data will overwrite the adjustments made for Basic Engineering.
Define All Phases before the single phase (i.e. Basic Engineering); this way the global adjustment is applied to all phases and only Basic Engineering receives the second modification.
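The column-ordering rule above can be illustrated with a small Python sketch. This is not ACCE code; the phase names and adjustment factors are made up purely to show why later columns overwrite earlier ones.

```python
def apply_columns(columns, phases=("Basic Engineering", "Detail Engineering")):
    """Apply adjustment columns left to right; later columns overwrite
    earlier ones for the phases they cover."""
    adjustment = {}
    for phase, factor in columns:
        targets = phases if phase == "All Phases" else (phase,)
        for p in targets:
            adjustment[p] = factor
    return adjustment

# Recommended order: global column first, phase-specific column second.
good = apply_columns([("All Phases", 0.90), ("Basic Engineering", 0.80)])
# Wrong order: the trailing All Phases column wipes out the specific change.
bad = apply_columns([("Basic Engineering", 0.80), ("All Phases", 0.90)])
```

In the "good" ordering, Basic Engineering keeps its specific factor (0.80) while Detail Engineering gets the global one (0.90); in the "bad" ordering, both end up at 0.90.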
Using the By Discipline form:
Allows for detailed adjustments to engineering. Control data for individual Disciplines (i.e. Piping Design) that are part of an Engineering Phase (i.e. Basic Engineering).
Specify an Engineering Workforce number. A blank value is allowed (Default EWF). Multiple columns can have the same number, each column representing a different adjustment to the same EWF.
Specify an Engineering Phase (i.e. Basic Engineering). Same Phase can be added to multiple columns if multiple disciplines will be modified.
Specify an Engineering Discipline (i.e. Piping Engineering).
Enter your own engineering wage rate and hours to overwrite the system calculations OR apply percent adjustments to the calculated hours
Add more columns to the form to define more adjustments.
The order of the Columns matters:
Do not use the "** - All Disciplines" option after defining a single discipline (i.e. Piping Design) in the same EWF, since the All Disciplines data will overwrite the adjustments made for Piping Design.
Define All Disciplines before the single discipline (i.e. Piping Design); this way the global adjustment is applied to all disciplines and only Piping Design receives the second modification. This is showcased in the screenshot above.
Note that in order to use an EWF other than the default, the workforce must be linked to a contractor; the information used depends on the contractor's task. |
Problem Statement: How can I control the Rental Equipment on my Aspen Capital Cost Estimator project? | Solution: Aspen Capital Cost Estimator (ACCE) provides a form to control the Rental Equipment (ER) included in the estimate.
In order to fully control all the ER, it is important to first review the results generated by ACCE's system, since the Equipment Rental form allows you to:
Add equipment, if it is missing and should be included
Delete equipment, if it should not be included in the estimate
Change the required days or fees for a piece of equipment.
For details on how to find the System Generated results please review Article 76965
Once you have identified the Equipment that will be modified, navigate to the Equipment Rental form in the Project Basis View tab. For each modification you would like to perform, add a new column to the form.
To add an equipment rental:
Enter a description (i.e. Include Ambulance)
Select the Contractor(s) that will assume this cost
Enter the Equipment Rental Number – this is a value unique to each rental service that can be estimated in ACCE. Review the list of numbers in the Icarus Reference guide, Chapter 32, Construction Equipment (i.e. 21 for Ambulance).
Keywords: Rental, Equipment, Remove, Include, Change, Fees, Duration, Adjust, Overwrite
References: None
Select ADD as the Rental action code
Enter the days required
Run the estimate to review the new results
Note: The ADD code does not allow you to modify the rental rate; use the CHANGE action if you wish to change it.
To modify an equipment rental:
Enter a description (i.e. Modify Crawler Crane 20 Tons)
Select the Contractor(s) whose cost will be modified
Enter the Equipment Rental Number – this is a value unique to each rental service. Review the list of numbers in the Icarus Reference guide, Chapter 32, Construction Equipment. Alternatively, if the equipment was already estimated by the system, use an Equipment Rental report to find this number (i.e. 202 for Crawler Crane – 20 Tons)
Select CHANGE as the Rental action code
Enter the days required and/or your own Monthly rental rate. If any of these fields is left empty, ACCE will still use the respective system values
Run the estimate to review the new results
Note: If the rental service was not included on the system estimate, do not leave the required days cell empty.
To remove an equipment rental:
Enter a description (i.e. Remove Crawler Crane 20 Tons)
Select the Contractor(s) from which the service should be removed
Enter the Equipment Rental Number – this is a value unique to each rental service. Review the list of numbers in the Icarus Reference guide, Chapter 32, Construction Equipment. Alternatively, if the equipment was already estimated by the system, use an Equipment Rental report to find this number (i.e. 202 for Crawler Crane – 20 Tons)
Select DELETE as the Rental action code
Run the estimate to review the new results |
Problem Statement: This article assumes there are no apparent problems with search in aspenONE Process Explorer (A1PE) when Domain Security is disabled.
After having used aspenONE Credentials tool to configure aspenONE search so that Domain Security is enabled, you may see multiple signs of poor performance when using search within A1PE, likely timeout messages are recorded in log files (see ProcessDataRestService logs in C:\ProgramData\AspenTech\DiagnosticLogs\ProcessData folder -and- various logs in \<Tomcat>\logs folder). You may also see status errors displayed when working with the aspenONE Process Explorer Admin page.
Errors reported may include some, if not all (at some time) of the following:
Unable to validate server response. HTTP Code: 415 Description: The initialization procedure that runs the first time the Search Engine connects to its configured domain has not completed. Please try again later.
Unable to validate server response. HTTP Code: 504 Description: Search Security experienced a connection timeout when connecting to your domain. Server's aspenONE Credentials tool can be used to increase connection timeout.
Because of the timeout errors, you should first check whether increasing the Connection and Read timeouts in the aspenONE Credentials tool resolves the problem (after restarting Apache Tomcat). If this does not resolve all issues adequately, then continue with the | Solution: described below.
Background: When Domain Security is enabled, the Search Engine will tailor the search results to only the items to which the user has read access, based on the user's group membership (the group used in AFW Local Security to define the Role holding the Read privilege). To improve general performance, aspenONE caches all the domain groups and users so it doesn't have to check in with the domain on each search request. This initialization should complete quickly unless it either cannot access the LDAP service or the number of domain groups and users configured in the Active Directory Domain is very large (perhaps even measured in the hundreds of thousands).
At this point you should at least verify the credentials specified in the aspenONE Credentials tool are valid.
Assuming that the problem is indeed caused by a large number of user objects in your Active Directory Domain, this solution describes how you might resolve these errors whilst keeping Domain Security enabled.
For V11 and V10.1:
Apache Tomcat has been installed as a 64-bit service. For 64-bit Apache Tomcat, open the TomcatNw.exe application in the %PROGRAMFILES%\Common Files\AspenTech Shared\<Tomcat>\bin folder (where N = Tomcat version number) to increase the Initial and Maximum Java memory pool to 2048 and 4096 MB respectively (choose values depending on actual system memory availability):
For V12 and V14:
For versions starting with V12.0, you should follow the instructions in the following KB: https://esupport.aspentech.com/S_Article?id=000101489 βMonitoring and Tuning Solr Memory Consumptionβ for updating memory allocation.
Perhaps having made these changes, even at this point you will benefit from sufficient improved performance and consequent reduction of error messages.
You can modify the behavior of aspenONE so it only caches a subset of the domain groups and users (instead of ALL of them which is the default until at least V14.0). To do this follow appropriate sets of the following instructions:
A. If aspenONE Process Explorer web server is V12.0+ then solr is installed in its own part of the file system: %PROGRAMFILES%\Common Files\AspenTech Shared\solr-N.N.N (solr-8.2.0 for aspenONE V12.0 to V14.0 but likely will be different in due course):
Stop SolrWindowsService in services.msc
Stop Apache Tomcat service in services.msc
Delete the existing Solr Security Database by deleting all files and folders found under \<solr>\server\solr\aspenTech\conf\derby\
Open a text editor (Run as Administrator) and select file: \<solr>\server\solr\aspenTech\conf\AspenSearchSolrSecurity.xml
There are 3 new Boolean switches available since V10.1; in each case setting the value to true will enable the feature.
OmitDomainUsersGroup – Prevents the connection to the Domain Users group at the level defined by ldapBase. This connection is otherwise required for Domain Security.
UseSeededDataOnly – Builds the security cache from the Process Data response instead of building a full Active Directory cache.
SkipPrefetch – Skips the initial load from Active Directory the first time Solr starts.
To enable all these features, add the following lines (carefully inserted just before the closing tag of the <lst> section) and then save the AspenSearchSolrSecurity.xml file:
<bool name="omitdomainusersgroup">true</bool>
<bool name="useseededdataonly">true</bool>
<bool name="skipprefetch">true</bool>
At this point, the Tomcat log files are obsolete given the configuration changes made so you may like to consider moving or deleting the contents of the log folder: %PROGRAMFILES%\Common Files\AspenTech Shared\<Tomcat>\logs\
Restart Apache Tomcat service in services.msc
Wait for \<Tomcat>\logs\AspenSchedulerStartUp.log to appear
Also wait for \<Tomcat>\logs\SolrFacadeStartup.log to appear
Restart SolrWindowsService in services.msc
Go to RESCAN section below
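For administrators who prefer to script the XML change from the steps above, here is a minimal Python sketch. It is an assumption-laden illustration, not supported tooling: it presumes the three switches belong inside the first <lst> element of AspenSearchSolrSecurity.xml, as described above, and you should always back up the file first.

```python
import xml.etree.ElementTree as ET

SWITCHES = ("omitdomainusersgroup", "useseededdataonly", "skipprefetch")

def add_security_switches(xml_text: str) -> str:
    """Return xml_text with the three <bool> switches appended inside the
    first <lst> element, skipping any that are already present."""
    root = ET.fromstring(xml_text)
    lst = root if root.tag == "lst" else root.find(".//lst")
    existing = {b.get("name") for b in lst.findall("bool")}
    for name in SWITCHES:
        if name not in existing:
            el = ET.SubElement(lst, "bool", {"name": name})
            el.text = "true"
    return ET.tostring(root, encoding="unicode")
```

Because the function skips switches that already exist, running it twice is harmless; read the file, pass its text through the function, and write the result back.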
B. If aspenONE Process Explorer web server is V10.1 to V11.0 then solr is installed within <Tomcat> part of the file system:
Stop Apache Tomcat service in services.msc (this is very important: Tomcat configuration files could become corrupted if you make these changes whilst Tomcat is running).
Delete the existing Solr Security Database by deleting all files and folders found under \<Tomcat>\appdata\solr\derby\ - Note, there are two derby folders in the appdata area of the file system: DO NOT MODIFY ANY OTHER THAN THE ONE FOUND UNDER SOLR FOLDER.
Open a text editor (Run as Administrator) and select file: \<Tomcat>\appdata\solr\collection1\conf\AspenSearchSolrSecurity.xml
There are 3 new Boolean switches available since V10.1; in each case setting the value to true will enable the feature.
OmitDomainUsersGroup – Prevents the connection to the Domain Users group at the level defined by ldapBase. This connection is otherwise required for Domain Security.
UseSeededDataOnly – Builds the security cache from the Process Data response instead of building a full Active Directory cache.
SkipPrefetch – Skips the initial load from Active Directory the first time Solr starts.
To enable all these features, add the following lines (carefully inserted just before the closing tag of the <lst> section) and then save the AspenSearchSolrSecurity.xml file:
<bool name="omitdomainusersgroup">true</bool>
<bool name="useseededdataonly">true</bool>
<bool name="skipprefetch">true</bool>
At this point, the Tomcat log files are obsolete given the configuration changes made so you may like to consider moving or deleting the contents of the logs folder: \<Tomcat>\logs\
Restart Apache Tomcat service in services.msc
Wait until you can locate the \<Tomcat>\logs\AspenSchedulerStartUp.log file with a new entry confirming AspenSearchDeployer completed deployment operations. If AspenSearchDeployer does not deploy then see knowledge base article: KB How-to-resolve-aspenONE-Process-Explorer-Admin-status-error-A-connection-with-the-server-could-not-be-established-Error-code-500
Go to RESCAN section below
RESCAN
Regardless of aspenONE Process Explorer web server version, you should now open aspenONE Process Explorer Admin
At this point you will be presented with (red font) status error (Error code: 415):
Ignore the status errors and click the Start Scan button regardless. The scan should commence without error and soon you will see that the status error at the top is replaced by (green font) Search Engine Status : Running and Configured
Once the scan has completed (and assuming no significant errors) you are now ready to test for acceptable performant operation of search throughout aspenONE Process Explorer. Your A1PE users should also be able to confirm only relevant tags (according to their AFW Security Roles) are being presented to them in the search results.
Keywords: Unresponsive
Connection timed out
The operation was timed out
A1PE wait spinner never stops when using Search for Everything page
Search
Performance
Domain Security
References: None |
Problem Statement: During the conversion of a PRO/II file into an Aspen HYSYS case, the converter encounters a consistency error related to a value that is specified and calculated at the same time:
This happens because the Aspen HYSYS solver is automatically turned on by design, and the PRO/II Converter turns it off via a back door when converting any columns. | Solution: When AspenTech developed the converter, this issue was identified and an option to ignore column solving during conversion was added. This option reduced the inconsistency errors in the test input files (around 100) at that time, so it was decided not to set Aspen HYSYS on hold during PRO/II conversion; it is better for customers to see many of the unit operations calculated successfully after conversion.
Unfortunately, this issue cannot be fixed on the PRO/II converter side. A workaround is to modify the original INP file by removing a heat exchanger (HX) to avoid the above inconsistency error. You can use the converted .HSC case as a startup file and add the HX back manually.
Keywords: PRO/II, Converter, Consistency error
References: None |
Problem Statement: It is common that users want to see results directly on the flowsheet to save time; however, pasting the stream report on the flowsheet may be unnecessary when only a few properties are desired, so a custom table might be the solution. | Solution:
The Custom Table option is located in the Flowsheet folder in the navigation pane; there you can create a custom table by clicking New, and you will be asked for a name.
Then you can add properties to the table by copying and pasting the properties you want, or by dragging and dropping the variables you want to add. The variables can be either inputs or calculated variables.
The default names in the table are generic; you can edit them to make the entries easier to identify, and the units of the variables can be changed directly in the table.
Finally, this table can be pasted directly on the flowsheet, or just as an icon if you want to save space. It can also be sent to Excel.
Keywords: Custom tables, report variables, flowsheet results
References: None |
Problem Statement: In order to generate templates that show the information we need about the process, we can add properties and change the order in which they are displayed. However, sometimes it is necessary to add properties contained in property sets created for the simulation, but the Add Property Set option appears disabled. | Solution: Once the property set to be used is correctly created and its properties properly defined, go to Setup in the navigation pane, open Report Options, go to the Stream tab, and click on Property Sets.
A window will appear where you can select the property sets you want to add to the stream report; you can also create a new one there in case you have not created any property set before.
Now, when adding more properties to the stream report you will see the Add Report Prop-Set option available; click on it and it will display the properties added by the property set. These will be added to the stream report, which you can save as a template and even select as the default template.
Keywords: Property set, stream report, add properties, template
References: None |
Problem Statement: It is common that customers want to paste the stream report on the flowsheet in order to read the stream information faster, directly from the flowsheet. Here is how to do this. | Solution: Pasting the stream report on the flowsheet is quite easy: go to the stream report you want to paste and click the Send to Flowsheet option on the Stream Summary tab.
You will then be asked whether you want to synchronize the streams in the report, so that it gets updated every time the simulation runs, and whether you want to synchronize the template.
After clicking OK, you will be asked to save the stream report as a template, so that the template can be edited and customized.
The stream report is then pasted on the flowsheet as an image; it cannot be edited directly on the flowsheet, but double-clicking on it will take you to the stream report.
Also, remember that you can customize the template by clicking Select Properties on the Stream Summary tab, where you can choose which properties are displayed and even the order in which they appear.
Keywords: Stream report, flowsheet, stream results
References: None |
Problem Statement: When using Column Internals to calculate the hydraulics of the column, it is common to compare the results from the Column Internals with the results from the column profiles, and a difference becomes noticeable: the reported flows are different.
As can be seen, the flow reported for stage 4 in Column Profiles is reported at stage 3 in Column Internals, and the same can be observed for other stages. | Solution: The explanation for these different per-tray values is that the vapor values are reported with two different approaches: one is the vapor from the selected stage to the tray above, and the other is the vapor to the selected stage.
For instance, observe stage 4: the flow reported by Column Profiles appears at stage 3 in the Column Internals results. This is because Column Internals reports, for stage 3, the vapor flow entering that stage from the stage below (stage 4), whereas Column Profiles reports, for stage 4, the vapor flow leaving stage 4 towards stage 3. See the following image.
Besides, remember that there are two different approaches to reporting the flow at each stage:
Total: Represents all the material balance in the stage, considering draws, pump arounds, side draws, etc.
Net: Represents only the flow that is leaving the stage
Column Internals reports Total flows; thus it will report different flows if there are vapor streams associated with the previous stage. Column Profiles reports only the Net flows.
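The one-stage shift between the two reports can be illustrated with made-up numbers. In this sketch draws and pumparounds are ignored, so net and total flows coincide and only the indexing differs; stages are numbered top-down as in the column.

```python
# Vapor leaving each stage upward, as Column Profiles would report it
profiles_vapor = {2: 120.0, 3: 135.0, 4: 150.0}
# Column Internals view: vapor arriving at a stage from the stage below (n+1),
# which is the same stream re-indexed one stage higher in the column
internals_vapor = {n - 1: v for n, v in profiles_vapor.items()}
```

The value Column Profiles reports at stage 4 (150.0) therefore shows up at stage 3 in the Column Internals view, matching the observation in the screenshots above.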
Keywords: Vapor flow, Column internals, Column Profiles, by stage
References: None |
Problem Statement: For some specific reactions, users can specify the heat of reaction according to experimental data; however, specifying the value requires a selection of the reference phase.
The heat of reaction is calculated from the difference between the enthalpies of formation of the products and the reactants, multiplied by the stoichiometric coefficients of the components involved in the reaction at standard conditions, and this heat of reaction is automatically included in the energy balance of the reactor.
However, in some cases this heat of reaction might differ from the value users expect from an experiment, the literature or another database, and in those cases it is appropriate to specify the heat of reaction. Doing so will not affect the energy balance of the RStoic, only the duty. The question is: what reference phase should we use? | Solution: The answer depends on the reference phase of the enthalpy of formation of the components. These values are commonly reported for gases; to obtain the enthalpy of formation of a liquid, the enthalpy of vaporization of the component must be subtracted. To enter the heat of reaction, you must therefore know the phases of the reacting system at the conditions used to measure it. The answer is as follows:
Vapor reference
Use this reference if the heat of reaction was measured considering only gases in the reacting system.
Liquid reference
Use this reference if the heat of reaction was measured with the reacting system in the liquid phase.
Keywords: None
References: None
Problem Statement: When creating an Oil Manager blend, it is common to test it in the simulation environment, and on some occasions we can observe flash problems in the results. For instance, in the following screenshot we can see that when the oil stream is mixed with water, the temperature of the mixture decreases too much.
Also, if we analyze the Oil stream, we can see two liquid phases, which can indicate that something is wrong. | Solution: This problem might be caused by the inputs; although they are not necessarily wrong, they could have caused a problem with the calculations. So we can go to the Properties environment and review the properties of the blend.
In this case, analyzing the Property Plot tab, we can see how the critical pressure of the blend decreases linearly close to 100% liquid volume. Given this evidence, go to the Oil Manager option and open the Correlation Sets tab; a correlation set is created by default, and we can create a new one by clicking Add.
The name of the new correlation set is editable. In this new set you can see all the default correlations; just change the Critical Pressure correlation to Rowe.
After doing this, the correlation set in the Input Assay and Output Blend options can be changed to the new one; hence, the assay has to be calculated again. This will put the simulation in On Hold mode; just activate the simulation again and the problem with the incorrect results is solved.
The correlation that should be changed depends on the results obtained from the property plots and the general results of the blend. For more information about the different correlations that can be used, please see the Oil Methods and Correlations section of the Help guide.
Keywords: Critical Correlations, Correlation set, Oil Manager
References: None |
Problem Statement: This KB article explains how to register the IQModel add-in in the COM Add-Ins list. | Solution: The IQModel add-in is only supported for the 32-bit version of Excel.
Please try the following:
Navigate to the following folder: C:\Program Files (x86)\AspenTech\APC\V10\Builder\Library
Confirm that iqmodel.xla is installed.
In Excel, go to File > Options > Add-ins and click Go.
Click on Browse and a file explorer will open.
Navigate to the folder in step one and register the add-in.
The tab should be available now. Remember that you must open IQModel alongside Excel before using the add-in.
Keywords: Excel Add In, IQModel, IQ
References: None |
Problem Statement: We have received reports that, after license keys were commuted to a client on an Azure VM, the client application could not be invoked and showed an SLM Configuration Error. | Solution: This issue is caused by SLM components conflicting with COM ports on the Azure VM; disabling the COM ports in Device Manager resolves the problem.
The steps are:
1. Right click the Start Menu at left-button or press key Win+X.
2. Choose Device Manager from the context menu.
3. Expand the category Ports (COM & LPT), select each COMx entry (x is a digit) and pick menu Action > Disable device.
4. Return all commuted license keys and reboot the PC.
5. Open SLM Commute utility and commute the necessary license keys.
Contact our support team if this workaround doesn't work.
Keywords: Azure, SLM, Commute
References: None |
Problem Statement: Frequently Asked Questions: Working with AspenTech software remotely from home | Solution: What are my options for accessing the AspenTech software?
Local installation on your computer. See below for licensing options.
Remote Server hosted and managed by your company, such as Citrix, Windows Terminal Services, or others.
Cloud environment provided by your company. The AspenTech software is supported on Azure, AWS and Frame. Visit our Cloud Deployment guide for more info.
What are my licensing options for running the AspenTech software from a local computer?
(does not apply if accessing the software remotely through Citrix or the Cloud)
Use a Virtual Private Network (VPN) to connect to your company's license server while you use the AspenTech software. Licenses will be checked out while you use the software and returned once you close the AspenTech application. Through VPN, licensing operates the same way it would if you were on your company's network.
Commute licenses while you are connected to your company's network directly or via VPN. Commuting allows you to borrow licenses to be used while you are disconnected from your company's network for up to 30 days. For more information on commuting, see this article.
Your company's license administrator may be able to request a temporary license on your behalf through the Support Site.
What are the network requirements for connecting to the license server through VPN?
Network ports: 5093 UDP and 5093 TCP.
Bandwidth: MTU set to 700 bytes or lower.
Latency (ping time): 300 ms or less, with 0% loss.
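As a quick sanity check of the TCP half of these requirements, a short Python sketch can test whether port 5093 is reachable from your machine. The host name below is a placeholder for your license server, and note that the UDP 5093 requirement cannot be verified this way.

```python
import socket

def check_license_port(host, port=5093, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout.
    SLM also uses UDP 5093, which this check cannot exercise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host name - replace with your license server):
# check_license_port("my-license-server.example.com")
```

If this returns False over VPN, the port is likely blocked by a firewall or the VPN is not routing traffic to the license server.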
How do you speed up the connection to the License Server?
If you are using V11 or older, follow these steps to optimize the connectivity.
Do you need to download the AspenTech software?
You can download it from the Download Center.
Do you need a copy of your existing license?
You can request a copy by submitting a request through the Support Site.
Do you need technical support with installing or troubleshooting a product or licensing issue?
Contact us via chat, phone, e-mail or web.
Keywords: Remote Support
Homeworking
AspenTech Support
COVID-19
VPN
References: None |
Problem Statement: On the Aspen Watch server, the necessary files cimio_setcim_dlgp_x64.exe or cimio_setcim_hist_dlgp_x64.exe are missing from the folder C:\Program Files (x86)\AspenTech\CIM-IO\io\cio_set_cim\cimio_setcim\, and the CIMIO for IP.21 interface cannot be started. | Solution: The root cause of this problem is that IP.21 was migrated from the 32-bit to the 64-bit edition in V12.0, and the 64-bit files of the CIMIO for IP.21 interface are not installed due to an installation kit issue.
The workaround steps are as follows:
1. Re-invoke the installation, and choose Install aspenONE products.
2. At Product Selection step, expand aspenONE Manufacturing Execution System category, and select Aspen CIM-IO Interfaces.
3. Continue the installation and reboot when it's completed.
Now you should be able to start the CIMIO for IP.21 interface. Contact an AspenTech technical consultant if this workaround doesn't work.
Keywords: Aspen Watch, CIMIO, IP.21, Interface
References: None |
Problem Statement: How can the Aspen InfoPlus.21 server name be written to a local variable in Aspen SQLplus? | Solution: Use this code:
local myservername;
myservername = (select line from (system 'hostname'));
write myservername;
Note: If the SQLplus Query Writer is run on the IP.21 server itself the command will return the name of that server. If the SQLplus Query Writer is run on a client / end-user system the command will return the name of the IP.21 server to which the SQLplus Query Writer is connected, not the local / client / end-user machine.
Keywords: None
References: None |
Problem Statement: Unable to retrieve ProMV model prediction results stored in the local InfoPlus.21 (IP.21) through OPC-DA. | Solution: Launch Aspen SQLplus.
Select the IP21 server that resides on the ProMV Server.
Execute the following SQL Statement
update IP_TagsBranch set PE_#_OF_OBJECTS = 8;
update IP_TagsBranch set PE_Description[5] = 'AA_MVDef';
update IP_TagsBranch set PE_Branch[5] = 'AA_MVDef';
update IP_TagsBranch set PE_Description[6] = 'AA_MVBatchDef';
update IP_TagsBranch set PE_Branch[6] = 'AA_MVBatchDef';
update IP_TagsBranch set PE_Description[7] = 'AA_MVMonDef';
update IP_TagsBranch set PE_Branch[7] = 'AA_MVMonDef';
update IP_TagsBranch set PE_Description[8] = 'AA_MVBatchMonDef';
update IP_TagsBranch set PE_Branch[8] = 'AA_MVBatchMonDef';
Launch Aspen InfoPlus.21 administrator. Expand PE_BranchDef. Under IP_TagsBranch, make sure these 4 items exist:
AA_MVDef
AA_MVBatchDef
AA_MVMonDef
AA_MVBatchMonDef
Keywords: InfoPlus.21
IP21
ProMV
OPCDA
References: None |
Problem Statement: After changing the original name of the Aspen Unified server some post-configuration steps must be done. These changes are required to enable Aspen Unified to work correctly after renaming the server.
For this example, the machine is in WORKGROUP and not in any domain. | Solution: After renaming the server, please follow the steps below to enable Aspen Unified.
Open a file explorer and go to the path C:\ProgramData\AspenTech\AspenUnified, then edit the AspenUnified.config file with a text editor such as Notepad or Notepad++ to update the new server name in the value of the MasterDatabaseServer key. For this example, the server has been renamed to GDOT.
Delete MasterConfigCache.json and MeshSettingsCache.json under C:\ProgramData\AspenTech\AspenUnified, these files will be re-created after restarting the Aspen Unified Agent Supervisor Service.
Open Microsoft SQL Server Management Studio and be sure to connect to the corresponding database of the renamed server.
Open the Security folder and then Logins, make sure that NT AUTHORITY\Authenticated Users and NT AUTHORITY\NETWORK SERVICE logins exist.
Make sure that both logins have the sysadmin role box checked: right-click the login, click Properties, and go to Server Roles. If the box is not checked, check it.
Open Databases and locate AUMaster to update values in master database.
Open the database and open the Tables folder. To edit a table, right-click on it and select the Edit Top 200 Rows option. Make sure not to put extra spaces at the end.
Edit the aumaster.GlobalRoleMembers with the AccountId of the new domain.
Edit the aumaster.GlobalSettings rows whose SettingId refers to servers, updating the Value column with the new server address.
Edit the aumaster.RegisteredDatabases to update the DatabaseServer and the ConnectionString columns of the new domain.
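As a sketch of the kind of string change the table edits above involve, the Python fragment below shows how a stored connection string might be updated for the new server name. The connection-string format shown is an illustrative assumption, not the actual value stored in aumaster.RegisteredDatabases:

```python
# Hypothetical sketch: updating a stored connection string after a server rename.
# The connection-string format below is an illustrative assumption; the real
# value in aumaster.RegisteredDatabases may differ.
def renamed_connection_string(conn: str, old_server: str, new_server: str) -> str:
    """Replace the old server name wherever it appears in the string."""
    return conn.replace(old_server, new_server)

old = "Data Source=OLDNAME;Initial Catalog=AUMaster;Integrated Security=True"
print(renamed_connection_string(old, "OLDNAME", "GDOT"))
```

Remember to apply the same rename to the DatabaseServer column, and to avoid trailing spaces, as noted above.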
On the Services window, stop the Aspen Unified Agent Supervisor Service and IIS Admin Service. Then, start the IIS Admin Service and the Aspen Unified Agent Supervisor Service in that order.
Keywords: Aspen Unified, GDOT, renaming server
References: None |
Problem Statement: Aspen HYSYS V12 Stream Reporter (HSR 1.7.3) | Solution: HYSYS Stream Reporter (HSR) is an Excel spreadsheet utility that allows you to import material stream information, such as conditions, properties and compositions, into a spreadsheet, and to compare streams from different cases.
HSR can report properties from the following phases: Overall, Vapour, Light and Heavy (Aqueous) Liquid, Combined Liquid and Solid. It also allows stream user variables and property correlations to be reported. It is also possible to create formulae in the output table. The user can save sets of properties or use one of the pre-built property sets. Streams from different HYSYS cases can be reported in the same stream table. Once a stream table has been generated it can be updated by pressing a single button. Stream tables can be moved to another Excel workbook whilst maintaining the ability to be updated.
HSR takes the form of an Excel spreadsheet file with embedded Visual Basic for Applications (VBA) code that demonstrates how HYSYS can be accessed programmatically. The VBA source code is freely accessible and users are encouraged to learn from it and adapt it to their own needs.
For V11.0 please see the KB Article 056528.
For V10.0 please see the KB Article 056331.
For V9.0 please see the KB Article 057415.
For V8.0 - V8.8 please see the KB Article 057412.
For older versions see the KB Article 054553.
Note
This Automation application has been created by AspenTech as an example of what can be achieved through the object architecture of HYSYS. This application is provided for academic purposes only and as such is not subject to the quality and support procedures of officially released AspenTech products. Users are strongly encouraged to check performance and results carefully and, by downloading and using, agree to assume all risk related to the use of this example. We invite any feedback through the normal support channel at [email protected].
Keywords: HYSYS Stream Reporter, HSR
References: None |
Problem Statement: Prior to V10 CP3, there was an inconsistency in the variable status handling for disabled limits. When the Steady-State ECEs for both the high and low limits are 1.0e+6, the CV status becomes "Pred Only". However, a CV with high and low limit ranks of 9999 would show a status of "Normal". From the controller's perspective, both CVs would be treated internally as prediction only. In V10 CP3 and later, both scenarios now result in a CV status correctly showing "Pred Only". If the CV is considered critical, then the controller will no longer be able to turn on. Additionally, if the CV is used in a Composite application, you will not be able to turn the variable on for Composite use. | Solution: The workaround to prevent the "Pred Only" status is to enable one of the limits, set the limit to a safe value, and set the rank to something other than 9999.
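The two disabling mechanisms described above can be summarized in a small sketch. This is a hypothetical helper, assuming the post-CP3 behavior in which either mechanism yields the same status:

```python
def cv_status(high_ece: float, low_ece: float, high_rank: int, low_rank: int) -> str:
    """Post-V10 CP3 behavior as described above: either disabling mechanism
    (Steady-State ECE of 1.0e+6 on both limits, or rank 9999 on both limits)
    makes the CV prediction-only."""
    if (high_ece == 1.0e6 and low_ece == 1.0e6) or (high_rank == 9999 and low_rank == 9999):
        return "Pred Only"
    return "Normal"

print(cv_status(1.0e6, 1.0e6, 5, 5))      # Pred Only
print(cv_status(10.0, 10.0, 9999, 9999))  # Pred Only
print(cv_status(10.0, 10.0, 1, 2))        # Normal
```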
Keywords: DMCplus
DMC3
Rank 9999
References: None |
Problem Statement: When opening the Aspen Unified page, an error message appears and prevents the page from opening correctly. | Solution: First, check in Windows Services that the Aspen Unified Agent Supervisor service is Running; if it is not, start the service and try opening the Aspen Unified web page again.
If Aspen Unified still does not open, go to Microsoft SQL Server Management Studio, connect to the database, click on Security, open Logins, and locate NT AUTHORITY\NETWORK SERVICE.
Right-click on NT AUTHORITY\NETWORK SERVICE and click on Properties; the following window will pop up.
Go to Server Roles and check the sysadmin box.
Click on OK and make sure to click on Save All to keep the changes performed.
Keywords: Aspen Unified, error
References: None |
Problem Statement: This KB article provides guidance for installing the latest CP on a GDOT machine. It also explains the SLM configuration steps before and after patch installation. | Solution: When upgrading a GDOT machine to the latest CP, use the following steps:
Launch aspenONE SLM License Manager:
Click on "Configure" and "Yes" to the dialog asking if you want to allow this app to make changes. The "SLM Configuration Wizard" dialog should appear:
Click on "Show Buckets". This launches a dialog that shows the list of available license buckets as well as which buckets are currently active (selected). GDOT will search for a suitable license in the selected buckets. Make sure that the correct buckets are selected. For GDOT this typically means that both the Default bucket is selected and that one of the higher-numbered buckets that contains the GDOT licenses is selected. For example:
Make notes of which license servers and, for each server, which buckets are selected. You will need to check this again after the installation.
Click "Done" on this dialog. If you made any changes then click "Apply Changes" on the "SLM Configuration Wizard" dialog.
Next, in the "SLM Configuration Wizard" dialog, expand the "Advanced Settings" list of options. The key step here is to make sure that "Enable Broadcasting" is unchecked (deactivated). If it is checked then uncheck it and again click "Apply Changes":
Close the dialog.
If you made any changes to the license configuration then reboot your machine at this point.
If there is a GDOT Emergency Patch (EP) or Field Test (FT) installed, you must first uninstall it. This rolls the installation back to the state of either the main release (e.g., V11) or the main release plus previous CP (e.g., V11 CP1). Next run the installer for the new CP. For example, starting with V11 CP1 EP6:
Uninstall EP6 to roll back to V11 CP1
Next run the installer for CP2 (without first uninstalling V11 CP1)
The state of the GDOT install is now V11 CP2 (we successfully installed CP2 on top of V11 CP1)
Make sure you reboot the machine each time you are prompted to do so. Do not wait to reboot later. In particular, do not apply additional patches without first rebooting.
Use the aspenONE SLM License Manager to check that:
The servers configured before the install are still configured after the install.
For each license server only the expected buckets are selected (i.e., match the configuration you noted before the install).
The "Enable Broadcasting" setting is unchecked.
If you need to make any changes then make sure you reboot the machine afterwards.
Keywords: GDOT, CP, SLM, buckets, enable broadcasting
References: None |
Problem Statement: There is a defect in V12 APC Online where, after deploying and starting an RTE controller from DMC3 Builder, one or more of the following symptoms may be seen:
In DMC3 Builder > Online section or in PCWS > Manage, the Last Run Status hangs (is stuck) and does not update with every cycle
On the PCWS web page, the controller status bubble is red beside its name, the web page has a flashing red outline and a red triangle warning symbol - all indicating that the controller has lost connection and is not updating online or receiving data
The web page shows this message: WARNING: The data for this application is not updating. Contact your System Administrator for help. | Solution: This defect (ID 568020) was originally fixed in V12 EP1 for APC Online, therefore the issue can be resolved by applying this patch: Aspen APC Online V12.0 Emergency Patch 1
You can also reach out to our Support team for access to the latest patches for V12 APC by submitting a case here: Submit a Case
Keywords: V12, dmc3, builder, controller, status, hangs, stuck, fail, red, connection, data, updating
References: VSTS 568020
Problem Statement: Minimum Alert Duration (MAD) is ignored for agents getting data from a CSV historian; alerts are sent out as soon as the probability threshold is exceeded. | Solution: The MAD may look like it is being ignored if the Interpolation Mode for the CSV historian is set to None. This is because when the Interpolation Mode is None, data points coming into Aspen Mtell may not be evenly spaced out, which can lead to multiple data points in each granule of time (for example, two data points falling within one hour in a data set with hourly granularity).
To resolve this, follow the steps below to change the Interpolation Mode.
1. Log into Aspen Mtell System Manager and click on Configuration > Settings > Sensor Data Sources
2. Select your CSV historian from among your sensor data sources
3. In the Configuration section check to see what Interpolation Mode is selected. If it is currently set to None select either Linear or Stair Step. If unsure of which to select then select Stair Step. Click Save
Keywords: Alarm
Minimum alarm duration
References: None |
Problem Statement: Why is the column diameter calculated with Tray-Design in a RadFrac block significantly higher for Koch Flexitrays compared to other tray types (Sieve, Bubble Cap, Glitsch Ballast, etc.)? | Solution: By default, RadFrac uses Koch Flexitray Bulletin 960. However, the most recent version is 960-1, and this newer version should be used.
To specify that Bulletin 960-1 should be used, go to the RadFrac Convergence | Convergence | Advanced tab and select 960-1 in the Flexi-Meth drop-down list.
In the attached example:
Tray Sizing Method Diameter
----------- ------------ --------
1 Sieve Tray 2.95 m.
2 Koch Flexitray - Bulletin 960 3.47 m.
3 Koch Flexitray - Bulletin 960-1 3.00 m.
Keywords: RADFRAC
Tray Sizing
Koch Flexytray
References: None |
Problem Statement: When trying to register IP21DAManager.dll in below folder,
C:\Program Files\AspenTech\MES\ProcessData for 64-bit
C:\Program Files (x86)\AspenTech\ProcessData for 32-bit
the following error message is encountered. | Solution: Check that the correct bitness of the command prompt (cmd.exe) is being used to register IP21DAManager.dll for 32-bit or 64-bit. The 32-bit command prompt can be launched from the C:\Windows\SysWOW64 folder and the 64-bit command prompt can be launched from the C:\Windows\System32 folder.
Note: Command prompt will need to be launched using Run as administrator.
Check that C:\Program Files\Common Files\AspenTech Shared\ and/or C:\Program Files (x86)\Common Files\AspenTech Shared\ exist in PATH environment variable.
Go to Control Panel | System and click on Advanced System Settings. In the Advanced tab, click on the Environment Variables... button. Select Path in System variables section and click on the Edit... button.
Keywords: The module IPDAManager.dll may not compatible with the version of Windows that you're running. Check if the module is compatible with an x86 (32-bit) or x64 (64-bit) version of regsvr32.exe.
References: None |
Problem Statement: When trying to pull long periods of data using Historical Values in the Excel Add-in on Microsoft Excel 2013, the user gets #VALUE! in the cells where the data was supposed to be shown, even though a "Success" message is received (see image below). This document explains why this happens and how to overcome this issue. | Solution: This happens because Microsoft Office 2013 has a limitation of 65536 rows for an array formula. So, for instance, if the user's system pulls data for a tag every 1 minute, then 65536 rows can hold up to around 45 days of data, and Excel shows the #VALUE! error when that limit is reached.
In order to avoid that, there are two options:
The user can either upgrade to a more recent version of Microsoft Office (the 2016 version increased the limit to more than 1 million rows), or
Uncheck Output results as an array in the Advanced... option that appears in the Excel Add-in's right-hand menu. This way, the output can be as large as the Excel worksheet's maximum size and the user can keep using the same Office version. To uncheck it, follow these instructions:
After clicking on Historical Values in the Aspen Process Data tab, a menu will show up on the right side of the screen. Click on Advanced...:
Uncheck the box that says Output results as an array and then click on OK:
Close Excel, reopen it and pull the data using Historical Values again.
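The ~45-day figure mentioned above follows directly from the 65,536-row limit. A quick sanity check, assuming one sample per minute as in the example:

```python
# Quick sanity check of the ~45-day figure: assumes one sample per minute.
ROW_LIMIT = 65536          # maximum rows an Excel 2013 array formula can return
samples_per_day = 24 * 60  # 1-minute data -> 1440 samples per day
days = ROW_LIMIT / samples_per_day
print(round(days, 1))      # about 45.5 days, matching the ~45-day figure above
```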
Keywords: #VALUE!
Historical Values
Array formula
References: None |
Problem Statement: There is a need to find the Priority setting of all the tasks set up in Aspen InfoPlus.21 Manager. However, clicking on each task individually is too tedious and cumbersome. | Solution: By using SYSTEM command in Aspen SQLplus to execute REG QUERY, it is possible to find the Priority setting for each task defined in Aspen InfoPlus.21 Manager.
Note:
The SQL script is to be run only on the IP.21 server.
The account used needs permission to run REG QUERY.
The account must belong to a role which has permission to execute SYSTEM command.
Below is the SQL script to list the Priority setting of each task. A copy of ListPriority.txt containing the SQL script is also attached to this knowledge base article.
DECLARE LOCAL TEMPORARY TABLE MODULE.TMP
(TaskName_t CHAR(30));
DECLARE LOCAL TEMPORARY TABLE MODULE.Tasks
(TaskName CHAR(30), ProcessPriority CHAR(15));
LOCAL ver CHAR(5);
LOCAL tmp CHAR(100);
ver = 0.0;
FOR(SELECT LINE l FROM SYSTEM('REG QUERY HKEY_LOCAL_MACHINE\SOFTWARE\Aspentech\InfoPlus.21 /V Version')) DO
tmp = TRIM(l);
IF(POSITION('Version' IN tmp) = 1) THEN
ver = SUBSTRING(3 OF tmp);
END
END
SET LOG_ROWS 0;
tmp = 'HKEY_LOCAL_MACHINE\SOFTWARE\Aspentech\InfoPlus.21\' || ver || '\group200\RegisteredTasks';
INSERT INTO MODULE.TMP (TaskName_t)
SELECT SUBSTRING(8 OF LINE BETWEEN '\') FROM SYSTEM('REG QUERY ' || tmp || '');
DELETE FROM MODULE.TMP WHERE TaskName_t = '';
FOR (SELECT TaskName_t FROM MODULE.TMP) DO
INSERT INTO MODULE.Tasks
SELECT TaskName_t, SUBSTRING(4 OF LINE) FROM SYSTEM('REG QUERY ' || tmp || '\' || TaskName_t || ' /v ProcessPriority');
END
DELETE FROM MODULE.Tasks WHERE ProcessPriority = '';
UPDATE MODULE.Tasks SET ProcessPriority = '0 - Low' WHERE ProcessPriority = '0x0';
UPDATE MODULE.Tasks SET ProcessPriority = '1 - Normal' WHERE ProcessPriority = '0x1';
UPDATE MODULE.Tasks SET ProcessPriority = '2 - High' WHERE ProcessPriority = '0x2';
UPDATE MODULE.Tasks SET ProcessPriority = '3 - VeryHigh' WHERE ProcessPriority = '0x3';
UPDATE MODULE.Tasks SET ProcessPriority = '4 - Critical' WHERE ProcessPriority = '0x4';
SELECT * FROM MODULE.Tasks ORDER BY ProcessPriority DESC;
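For reference, the hex-to-label translation that the UPDATE statements above perform can be sketched outside SQLplus as well. The sample REG QUERY output line below is illustrative; actual output formatting may vary by Windows version:

```python
# Hypothetical sketch, outside SQLplus, of the hex-to-label mapping that the
# UPDATE statements above apply to the ProcessPriority registry value.
PRIORITY_LABELS = {
    "0x0": "0 - Low",
    "0x1": "1 - Normal",
    "0x2": "2 - High",
    "0x3": "3 - VeryHigh",
    "0x4": "4 - Critical",
}

def parse_reg_line(line: str) -> str:
    """Map the last token of a 'REG QUERY ... /v ProcessPriority' output line
    (e.g. 'ProcessPriority    REG_DWORD    0x1') to a priority label."""
    parts = line.split()
    return PRIORITY_LABELS.get(parts[-1], "unknown")

print(parse_reg_line("ProcessPriority REG_DWORD 0x1"))  # 1 - Normal
```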
Keywords:
References: None |
Problem Statement: Example of separating pure sulfur from a mixture of CS2, H2S, and SO2. | Solution: This application is part of the sulfur process, where a column can be used to separate sulfur from a mixture of CS2, H2S, and SO2.
In the attached example, the model represents liquid sulfur as a single apparent component, "sulfur", based on the saturated vapor pressure of that apparent component.
The Aspen Plus data regression tool is used to fit PLXANT (the extended Antoine vapor pressure equation) to this data, with extremely close agreement to the data in the published articles.
For the solubility of sulfur in CS2, H2S, and SO2, the data for solubility of sulfur in CS2 is used to fit the Gibbs energy of solid sulfur.
H2S solubility data in liquid sulfur was available in the NIST database and was applied, with some experience-based modifications, to fit the Henry's law coefficient for H2S in sulfur.
Assumptions made:
Henry binary parameters for the H2S/methane pair were not available, so the Henry binary parameters for the H2S/pentane pair were copied to avoid a missing property parameter error.
Henry binary parameters for the N2/sulfur pair were not available, so the Henry binary parameters for the N2/CS2 pair were copied. This parameter controls the apparent solubility of nitrogen in liquid sulfur.
The above parameters could be regressed using experimental or literature data, but for this specific example the assumptions above are made.
It is assumed that the solubility of elemental sulfur in water is extremely low; this is controlled by the NRTL binary parameter for the water/sulfur pair. For convenience, the user can assume:
τij = Aij + Bij/T
Let Aij = 0
Let Bij = Bji
The attached model has Bij = 750. The user can change this from the Properties environment if a lower water content is desired.
Keywords: Sulfur separation from the mixture of CS2, H2S, SO2 mixture, Pure Sulfur
References: 1. The Vapor Pressures of Sulfur Between 100° and 550° with Related Thermal Data, by William A. West and Alan W. C. Menzies.
2. Solubility of Sulfur in Liquid Sulfur Dioxide, Carbon Disulfide, and Carbon Tetrachloride, by Jane M. Austin, Dan Jensen, and Beat Meyer, Chemistry Department, University of Washington, Seattle, Wash. 98105
Problem Statement: How to change the APC Performance Monitor server name? | Solution: 1. Ensure you are logged in with an administrator account.
2. Change the server name of InfoPlus.21 following the KB article: Executable that edits the file paths in the historian config.dat file - h21chgpaths.exe.
3. Update the ADSA configuration following this KB article: How do I change the computer name or nodename of the Aspen InfoPlus.21 server?
Note: If you installed PCWS (APC Web Server) in another server, you also need to update the ADSA configuration at PCWS.
4. Update the ODBC connection SQLplus on localhost in System DSN, changing the TCP/IP host to the new server name.
5. Start InfoPlus.21 database server in InfoPlus.21 Manager.
6. Invoke the InfoPlus.21 Administrator, expand the node InfoPlus.21 > %ServerName% > Definition Records > AW_CTLDef, and update AW_HOSTNAME to the new server name for each record.
7. Invoke Watch Maker, check whether it connects to the database, and verify that the collected controllers are listed in the table.
Keywords: Performance Monitor, Watch, Rename
References: None |
Problem Statement: The Plot Plan layout tool works with Aspen Basic Engineering (ABE) to allow two-way transfer and editing of plot plan coordinate data between ACCE and ABE. | Solution: The step-by-step procedure for an example is described in the attached PDF file.
Keywords: None
References: None |
Problem Statement: Why can I not open an APS Access Database model even if I have installed a version of Microsoft Access? APS uses OLE DB technology to read and write a model in Access Database with file extension .mdb or .accdb. The OLE DB component is installed with a Microsoft Access Database Engine (called Microsoft Access 2013 Runtime for Office 2013). Note that you may still be able to open an Access Database model with an ODBC dsn file without OLE DB.
You may encounter an error when attempting to open an Access database even though you previously installed the Microsoft Access Database Engine and could open the model. A Microsoft update may corrupt the registry values for an existing Access Database Engine, causing the inability to open an existing Access database model. | Solution: We recommend you:
Uninstall the Microsoft Access Database Engine.
Download and install the 32-bit version of Microsoft Access 2013 Runtime from Microsoft.com. Because APS is a 32-bit application, you should download the 32-bit version regardless of whether your machine has 32-bit or 64-bit Office.
You should be able to open your model after installing the Access 2013 Runtime from the following location:
https://www.microsoft.com/en-us/download/details.aspx?id=39358
If you install Access Database Engine 2016
If you install Access Database Engine 2016 instead of Microsoft Access 2013 Runtime, you may see a message similar to the following:
"Office 16 Click-to-Run Extensibility Component 64-bit Registration prevents Office 365 32-bit installation".
Use the following steps to uninstall the 64bit Office 16 Click-to-Run Extensibility Component:
On your keyboard, press Win + R to open the Run window.
In the Open field, type "installer" and click on OK. The installer folder displays.
Right-click in the column headers to view columns to display.
Click More β¦ and click Subject in the Details pane.
Click OK to display the Subject column.
Sort on the Subject column and scroll down to find the item with "Office 16 Click-to-Run Extensibility Component 64-bit Registration" as the Subject.
Right-click on the associated line for the MSI file and select Uninstall.
After uninstalling, you will then be able to download and install the 32bit Access Database Engine 2016 from the following location:
https://www.microsoft.com/en-us/download/details.aspx?id=54920
Keywords: None
References: None |
Problem Statement: This Knowledge Base article shows how to link a particular Aspen mMDM hierarchy (called Corporate2) to an Operations Navigator page in Aspen Role-Based Visualization. | Solution: Please download the attached file titled:
Linking_an_mMDM_hierarchy_to_an_OpsNav_page.doc
which provides step-by-step instructions.
Keywords: OpsNav
ODM
References: None |
Problem Statement: The V7.3 package of DVDs could contain as many as three (3) DVDs with DVD 2 as part of their label.
The three DVDs would be labeled as DVD 2T, DVD 2 (32-bit) and DVD 2 (64-bit)
Which one should be used to upgrade (or newly install) Aspen Role-Based Visualization (RBV)? | Solution: Aspen Role-Based Visualization (RBV) actually exists on all three DVDs, for different situations.
DVD 2T is the one to be used if using the all-inclusive token-based licensing model that was introduced in July 2009. It can only be used with token-based run-time license keys. See our article 129084 for more details on how to see if you have token licenses.
If not using token-based licensing, then the choice between the 32-bit and 64-bit DVDs depends on the Operating System (O.S.) of the PC. If upgrading from a version earlier than V7.3 on the same PC, then the only choice is to use the 32-bit DVD, because this product was not supported on a 64-bit system until V7.3. If newly installing on a 32-bit O.S., then again the 32-bit DVD would be used. However, if newly installing (via an upgrade or not) on a 64-bit O.S., then the 64-bit DVD should be used.
Remember also that the version of Microsoft SharePoint is dependent on the Operating System.
Keywords: RBV
Upgrade
Install
References: None |
Problem Statement: To improve performance, many of the user interfaces for Aspen Operations Domain Model (ODM), such as the ODM Advanced Editor, temporarily cache data when reading from the database. The attached document discusses several mechanisms that cause ODM to release the cached data so that the data is read again. | Solution: Please download and review the attached document titled: Aspen ODM Configuration Guideline - Changing the Default Data Cache Settings.
The document has been prepared by AspenTech development team.
Keywords: None
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve the following MMDM Validation errors:
Item (...) referenced by property BPCIdentifier of item (.....) could not be found
which may be encountered in the Publish Form when generic items are deleted in the Aspen Manufacturing Master Data Manager database without cleaning up the references first. | Solution: At the present time, mMDM does not provide built-in referential integrity; it is left up to the user to manage dependencies between the various definitions. To resolve this situation, there is currently only one solution: the user must undelete the definitions that are causing the orphaned references.
The trick is finding the item to undelete. Here are some guidelines:
If you see bracketed numeric references, such as {100000, 1, 100018}, from this you can decipher the definition type. Use the table below to determine which component and collection the item belongs to. In some cases the error message will provide clues, such as AliasDefinition with ID: '100010'.
Another example message: DomainNamespace with ID '100005' not found. This message informs you that the missing item is a DomainNamespace (aka. Alias Domain), which is found under the Alias folder in the mMDM Editor.
Once you have identified the component and collection, you must undelete the item, as follows:
1. Use the mMDM Editor to navigate to the folder that contains the collection.
2. After selecting the collection folder, select the View | Filter menu.
3. From the Filter window, enable the Show only deleted definitions checkbox.
4. Close the informational dialog box, if it appears.
5. Now you should see a list of past deleted items in the grid. You can right-click on the item, then select Undelete.
6. This will show the Lifespan Editor window, which allows you to configure the desired end time for the item. For most cases, you should accept the default end time, so simply click OK.
7. The item is now undeleted, and will disappear from the deleted list. To return to the normal grid list, either use the Filter window to uncheck the Show only deleted definitions checkbox, or simply right-click in the grid to show the context menu, then select Clear Filter. (Note: simply selecting a different collection will also automatically clear the filter.)
8. Once all of the orphaned references have been undeleted, you should then remove any references to them held by other items. For example, regarding aliases, you should use the Alias editor to remove any alias entries that refer to the item. If you plan to delete a Domain Namespace, then you should ensure that there are no aliases defined for that namespace in the Alias Editor.
9. After all references have been removed, then you can go back and safely delete the items again that we had just undeleted, since they no longer will result in orphaned references.
Here is the table of component and collection identifiers:
Each component is listed below with its Component ID and name, followed by its collections (Collection ID and name), with notes where applicable.

10000 - Alias
    1: Alias (hidden in the Editor; only indirectly viewable via the Alias Editor)
    2: Alias Domains (also known as Domain Namespaces)
    3: Alias Domain Type
150000 - Allocation
    1: Allocations
    2: Allocation Statuses (hidden in the Editor; viewable when configuring allocations)
    3: Allocation Values (hidden in the Editor; viewable when configuring allocations)
20000 - Attribute
    1: Attributes
180000 - Business Component
    1: Business Processes
    2: Global Data
    3: Locales
200000 - Business Party
    1: Business Parties
    2: Business Party Types
130000 - Class
    1: Classes
    2: Calculations
    3: Calculation Groups (hidden in the Editor)
    4: Calculation Engines (hidden in the Editor)
210000 - Document
    1: Services
    2: Hosts
    3: Processors
160000 - Dynamic Data
    1: Dynamic Data Sources
    2: Dynamic Data Source Types (hidden in the Editor; viewable when configuring dynamic data sources)
    3: Dynamic Data Communication Types (hidden in the Editor; viewable when configuring dynamic data sources)
    4: Dynamic Value References (hidden in the Editor; viewable when configuring dynamic data sources)
    5: SQL Queries
30000 - Equipment
    1: Equipment Items
    2: Equipment Types
100000 - Generic
    1: Generic Items
    2: Generic Item Types
40000 - Hierarchy
    1: Hierarchies
    2: Hierarchy Levels
50000 - Location
    1: Locations
    2: Location Types
60000 - Material
    1: Materials
    2: Material Types
140000 - Material Lot
    1: Material Lots
    2: Material Sub-lots (hidden in the Editor; viewable when configuring material lots)
    3: Material Lot Statuses (hidden in the Editor; viewable when configuring material lots)
170000 - Personnel
    1: Personnel Items
190000 - Portfolio
    1: Benches
    2: Books
    3: Chains
80000 - Physical Property
    1: Physical Properties
70000 - Price
    1: Commodity Prices
    2: Commodity Types
    3: Market Types (hidden in the Editor; viewable when configuring commodity prices)
    4: Delivery Types (hidden in the Editor; viewable when configuring commodity prices)
    5: Temporal Types
    6: External Source Types (hidden in the Editor; viewable when configuring commodity prices)
    7: Calculation Types (hidden in the Editor; viewable when configuring commodity prices)
    8: Price Types (hidden in the Editor; viewable when configuring commodity prices)
    9: Quality Types (hidden in the Editor; viewable when configuring commodity prices)
    10: Range Types (hidden in the Editor; viewable when configuring commodity prices)
    11: Input Types (hidden in the Editor; viewable when configuring commodity prices)
    12: Price Equations
90000 - Unit of Measure
    1: Unit of Measures
    2: Quantity Types
    3: Unit of Measure Sets
120000 - User (this component is hidden in the Editor)
    1: Users (hidden in the Editor and managed internally by mMDM)
110000 - Version
    1: Versions (hidden in the Editor; viewable when publishing the data cache)
NOTE: AspenTech development is currently working on an enhancement that would prevent users from deleting definition items without first deleting any dependencies between various definitions. |
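As an illustration of deciphering a bracketed reference such as {100000, 1, 100018}, the sketch below looks up the first two numbers in the table above. Treating the third number as the item's own ID is an assumption made for illustration:

```python
# Sketch: decoding a bracketed reference such as {100000, 1, 100018} using the
# table above. Treating the third number as the item's own ID is an assumption.
COLLECTIONS = {
    (10000, 2): ("Alias", "Alias Domains"),
    (100000, 1): ("Generic", "Generic Items"),
    (100000, 2): ("Generic", "Generic Item Types"),
    # ...extend with the remaining rows of the table as needed
}

def decode(ref: str):
    """Split '{comp, coll, item}' into its numbers and look up the first two."""
    comp, coll, item = (int(p) for p in ref.strip("{}").split(","))
    component, collection = COLLECTIONS[(comp, coll)]
    return component, collection, item

print(decode("{100000, 1, 100018}"))  # ('Generic', 'Generic Items', 100018)
```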
Problem Statement: mMDM (formerly known as ODM) provides several config files under \AspenTech\Enterprise\CDM\Configuration. Some have extensions of ".config" and others have ".adoconfig". Those ending with ".config" are known as workspace configuration files. A workspace describes to mMDM how to connect to a specific data source and which components an application should dynamically load at runtime. The ".adoconfig" files contain database connection settings for a live relational database (these settings are ignored if no database is set up for the computer). | Solution: Here is a description of each of the workspace files used by Aspen Manufacturing Master Data Manager:
Default_Database
    Description: A default workspace for connecting to a live database connection. It obtains the DB connection info via a reference to BPC_Default_SQL.adoconfig.
    Used by: mMDM Editor; mMDM Bulk Load; mMDM Administrator; any other client application configured to use this option via the Data Source Wizard in the mMDM Administrator; mMDM Service (Windows background service).

Default_Files
    Description: A default workspace for connecting to read-only XML files that have been published to the computer.
    Used by: mMDM Editor; mMDM Bulk Load; mMDM Administrator; any other client application configured to use this option via the Data Source Wizard in the mMDM Administrator.

Default_WebService
    Description: A default workspace for connecting to a live database connection. Instead of using the file BPC_Default_SQL.adoconfig, it obtains the DB connection info via a call to an mMDM Web Service on the master mMDM server.
    Used by: mMDM Editor; mMDM Bulk Load; mMDM Administrator; any other client application configured to use this option via the Data Source Wizard in the mMDM Administrator.

BPCWebAdmin
    Description: A special workspace for connecting to a live database connection. It obtains the DB connection info via a reference to BPC_Default_SQL.adoconfig. This workspace is identical to Default_Database, but is provided separately in case a site wants to customize it for web applications.
    Used by: mMDM Web Manager; mMDM Web Service; OpsNav.

Published
    Description: A special "hybrid" workspace for connecting to both a live database connection and read-only XML files. It obtains the DB connection info via a reference to BPC_Default_SQL.adoconfig. Some components get their data from the database, while others are configured to get data from the XML files.
    Used by: OpsNav.

BPCSubscribe
    Description: A special "hybrid" workspace for connecting to both a live database connection and read-only XML files. It obtains the DB connection info via a reference to BPC_Default_SQL.adoconfig. Some components get their data from the database, while others are configured to get data from the XML files.
    Used by: mMDM Service (Windows background service) for the purposes of subscribing to published XML files.
The Default_WebService.config file is NOT used to configure the mMDM Web Service. Instead, that config file is intended for mMDM client computers that wish to obtain the database connection settings from the mMDM Web Service running on the mMDM master server. By default, the mMDM Web Service uses BPCWebAdmin.config, which is the same workspace configuration file used by the mMDM web site.
You can change the workspace used by the mMDM Web Service by changing the value for "WORKSPACE_ALIAS_NAME" in the <appSettings> section of the Web.config file in the folder "\Program Files (x86)\AspenTech\mMDMWebServices_V7.3". It defaults to BPCWebAdmin, but you could change it to "Default_Files", for example, to cause the web service to use the read-only XML files; then perform an IISRESET. Note, however, that this configuration has never been tested, and it will probably fail because the web service expects a writable workspace.
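For reference, the relevant entry looks like the following sketch (the surrounding content of your actual Web.config will differ):

```xml
<configuration>
  <appSettings>
    <!-- Workspace alias used by the mMDM Web Service; defaults to BPCWebAdmin -->
    <add key="WORKSPACE_ALIAS_NAME" value="BPCWebAdmin" />
  </appSettings>
</configuration>
```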
Keywords: None
References: None |
Problem Statement: This Knowledge Base article provides steps to troubleshoot SmartClient startup and update issues. | Solution: The Smart Client is a mechanism that allows the mMDM Advanced Editor to be accessed from the mMDM (Web) Manager (mMDM's web user interface).
There are two ways to launch the mMDM Editor. One is via the Smart Client on the mMDM Web Manager. Alternatively, if you are on a computer where mMDM has been directly installed, such as an mMDM Server, you can launch the Editor from the Start menu, directly.
If that still fails, then search for the assembly named Interop.aspenSecurity in the Global Assembly Cache (GAC), found under the C:\Windows\Assembly folder. If it is not found, AFW Security is not fully installed; modify the AFW Security install and be sure to select the feature named "AFW Security .NET Components". If the assembly is found, right-click to view its Properties and, from the Version tab, confirm that the version is 4.0.0.509.
And last but not least, disable the IE Enhanced Security Configuration, which is active by default.
If the above steps do not resolve the issue, try to update the SmartClient on the mMDM server by following the steps below.
The Smart Client can be updated on the computer where the mMDM Manager is located. To update the Smart Client, please perform the following steps:
1. Launch the Smart Client update utility named SetupSC.exe, which is found in the C:\Program Files\AspenTech\mMDMWebManager_V7.3\SmartClient folder.
2. Press the Update Smart Client button.
3. Press the Close button (X) in the upper right to exit the utility.
4. A new folder will be generated using the naming convention of "AdvancedEditor_7_3_0_3000".
5. The SmartClient should now be properly configured.
Due to the wide variety of deployment scenarios (OS versions, certificate issues, etc.), unforeseen issues could arise when using SetupSC.exe to update the mMDM smart client configuration. For example, SetupSC.exe uses Microsoft's certificate capabilities to "sign" the smart client packages so that they can later be safely downloaded from the mMDM Configuration (Web) Manager to other user computers. Sometimes the signing process can have problems. SetupSC.exe has some new features, such as logging, which should help to diagnose these kinds of problems.
Keywords: ODM
Operations Domain Model
References: None |
Problem Statement: The ad hoc Aspen Production Record Manager (Formerly Aspen Batch.21) reports can fail with the following error message when run from the Batch Query Tool or Batch Detail Display:
Cannot find 'C:\Program Files\AspenTech\Working Folders\\Batch.21\temporary report.htm'. Make sure the path or internet address is correct.
(Note: For the correct path to be found there should only be 1 slash before the word 'Batch.21' in the path listed above.)
NB: This issue can also be seen when running Windows Server 2008, however the error message will be:
Cannot find 'c:\users\USERNAME\appdata\local\temp\3\Batch.21\temporaryreport.htm' | Solution: The Batch.21 executable appends \Batch.21\temporary report.htm to whatever it finds in the ASPENWORKINGDIRdir registry key. This registry key is located in:
HKLM | Software | AspenTech | Setup
Make sure the contents of this key do not end with a slash. Assuming your install path is on the C:\ drive, the contents of ASPENWORKINGDIRdir should look like this:
C:\Program Files\AspenTech\Working Folders
not like this:
C:\Program Files\AspenTech\Working Folders\
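The failure mode is easy to reproduce. The sketch below is a hypothetical Python reproduction of the path concatenation (Batch.21 itself is not a Python program; this only illustrates the trailing-slash effect):

```python
# Batch.21 builds the report path by appending a fixed suffix to the
# ASPENWORKINGDIRdir registry value. If that value ends with a backslash,
# the result contains a double backslash and the file lookup fails.
SUFFIX = r"\Batch.21\temporary report.htm"

def report_path(working_dir: str) -> str:
    # Naive concatenation, mimicking the observed behavior.
    return working_dir + SUFFIX

good = report_path(r"C:\Program Files\AspenTech\Working Folders")
bad = report_path(r"C:\Program Files\AspenTech\Working Folders" + "\\")

print(good)  # single backslash before Batch.21
print(bad)   # double backslash before Batch.21 -> "Cannot find ..." error
```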
For Windows Server 2008:
1. Ensure you are running the Query tool with Run as administrator option (right click the program to access this option.)
2. Turn off User Account Control (UAC).
Keywords: None
References: None |
Problem Statement: Is there a way to get the Business Party name via the reference in the class when getting the values from the class (i.e., without having to do a second query of the Business Parties where the identifier matches)? | Solution: To get the Business Party name via the reference in the class use the following syntax:
ClassValueFilter(ProductionField.Operator).ResolveIdentifier.Name
Keywords: None
References: None |
Problem Statement: This Knowledge Base article answers the following question:
Can MS SharePoint web parts access Aspen Operations Domain Model (ODM) data? | Solution: The Data View Web Part is an MS SharePoint web part that can read from many data sources (such as MS SQL Server, XML, etc.) and display data using XSLT. Aspen ODM exposes a web service that allows external tools to access ODM data. Therefore, it is possible to configure the Data View Web Part to access ODM data through the ODM Web Service.
In order to see the ODM Web Service's available methods, open your Internet browser and go to the following url:
http://localhost/BPCS95WebServices/BPCS95WebService.asmx
Example:
The GetDataValue method returns the data value read using Equipment.Equipment.TK101@!Name
Below is a link to a website that shows the steps to configure the Data View Web Part to access web services.
http://www.sharepointblogs.com/ssa/archive/2007/02/23/showing-web-service-data-in-a-data-view-web-part.aspx
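As an illustration of calling the service programmatically, the sketch below only constructs the HTTP GET URL for an .asmx-style web method. The parameter name (`reference`) is an assumption made for illustration; verify the actual method signature on the service description page shown above:

```python
from urllib.parse import urlencode

# Hypothetical helper: build the HTTP GET URL for an .asmx web method.
# Server name, method name, and parameter names should be verified against
# the service description page (BPCS95WebService.asmx) in a browser.
def asmx_get_url(server: str, method: str, **params: str) -> str:
    base = f"http://{server}/BPCS95WebServices/BPCS95WebService.asmx/{method}"
    return f"{base}?{urlencode(params)}" if params else base

url = asmx_get_url("localhost", "GetDataValue",
                   reference="Equipment.Equipment.TK101@!Name")
print(url)
```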
NOTE: The Page Viewer Web Part is an AspenTech Iframe web part. This web part is used to view an external webpage, and it has no knowledge of a data source.
Keywords: webpart
References: None |
Problem Statement: Is there versioning on the hierarchy / mMDM? Is rollback possible? If not, what is the point of versioning? | Solution: There is no versioning per hierarchy; there is only versioning of the workspace or, more precisely, of the mMDM database where all your hierarchies live. What this means is that every time you make changes to your hierarchy and publish it, a new version of the mMDM database is created containing the changes. It is therefore possible to connect (not roll back) to a previous (or older) version of the database and your older version of the hierarchy. Below is a screen capture showing how it is done.
Again, this is not the same as rollback because, at this time, you cannot edit a past published version. Publishing a new version effectively takes a snapshot of the data and freezes it; the snapshot is immutable. To edit data, you must change back to the working version.
The intent of versioning is to create an official, approved snapshot of the data that can be published out to other computers. Other applications can then generate XML documents (such as BPDs) with the version ID embedded. Later that XML document can be properly processed by referencing the originally published mMDM version, such as when resolving aliased names. So versioning is intended for cases where reference data might get used in an XML message and that message is processed at a future time.
To resolve data corruption issues, you should rely on the database backup tools from the RDBMS vendor, since versioning is not designed as a backup/restore mechanism.
Currently, there is an enhancement request in place to allow a past version to be promoted to the working version. The enhancement, if approved, will be implemented in a future version of mMDM.
Keywords: Updates not allowed. The Version Strategy Type 'Specific' indicates the Workspace is ReadOnly
References: None |
Problem Statement: This Knowledge Base article shows how to display Node Names in the Aspen Operations Navigator (OpsNav) tree instead of resource names. | Solution: In order to display Node Names in Aspen Operations Navigator (OpsNav) instead of resource names, you will need to create or load the OpsNavHierarchySettings class (see the screen capture below), which needs to be added to the hierarchy in order to display the node name. The steps are as follows:
1. Open mMDM's editor, then go to the intended hierarchy:
2. Add the OpsNavHierarchySettings class, and set "AlwaysDisplayNodeName" to True as follows:
3. Save everything and publish your hierarchy.
OpsNavHierarchySettings class:
(The class is available in the attached ODMBaseLoad.XML Bulk Load File.)
Keywords: None
References: None |
Problem Statement: How do I add an mMDM hierarchy to another mMDM hierarchy? | Solution: Let us suppose that we have two hierarchies that we want to join together to make maintenance simpler or to add additional content to one of them.
In our example we will join a hierarchy called Material with a hierarchy called S95.
1. Open each hierarchy and in its Hierarchy Definition Settings set the IsContentMixed property to True
2. Next, navigate to the Hierarchies folder in the hierarchy that you want to add an additional hierarchy to (S95 in our example) and drag that hierarchy (Material in our example) to the Design pane over the top node in our base hierarchy (S95).
3. Select No when presented with the following dialog box to add the selected Hierarchy Definition as a Hierarchy node:
4. The Material hierarchy has been added to the S95 hierarchy:
5. Save and publish your hierarchy.
6. View your hierarchy in Aspen Operations Navigator.
Keywords: None
References: None |
Problem Statement: How to resolve the error "S95 can't be connected. BPC S95 Web Service is not responding: http://<servername>/bpcs95webservicesanonymous/bpcs95webservice.asmx" when trying to use the S95 Search feature within Aspen Tag Browser. | Solution: Put the following URL in an Internet Explorer session:
http://<servername>/bpcs95webservicesanonymous/bpcs95webservice.asmx
Verify the following web page displays.
Scroll down to InitializeSession option.
Select the Invoke button.
If a similar error to the following appears, make the necessary changes and test again. In this instance, since SQL Server and mMDM are on the same machine the NT Authority\IUSR account needs to be added to SQL Server.
However, if SQL Server and mMDM are on separate machines then the NT Authority\Anonymous Logon account is required to be added to SQL Server.
Keywords: mMDM
database
S95
BPC
not responding
References: None |
Problem Statement: How to create a nested class item within another class in Aspen Manufacturing Master Data Manager. | Solution: The base class is created as an ordinary class, by right-clicking on the Classes folder and selecting New.
ClassA has been configured as the following:
Now create ClassB to reference the base ClassA, where ClassA is set to be an array within ClassB. Note that you need to use the Data Types tab to add basic data types. When adding another class to the design, you must select the Classes tab, as shown in the image below. As you can see, ClassA has already been added to ClassB, and the icon of ClassA denotes that it is a nested class.
After adding ClassA to ClassB, you will need to change the IsArray field under Data Types to True.
Now an item that references ClassB will inherently reference ClassA as well.
Keywords: mMDM
database
S95
hierarchy
parent
References: None |
Problem Statement: This knowledge base article explains the reason for not being able to edit or publish from an older version of a Workspace. | Solution: To fully understand why it is not possible to publish from an older version of a Workspace, let us first understand mMDM's versioning concept.
When mMDM is first used, all changes are made to a "Working" version. In fact, this is the only version that is writeable. At some point, the edits to the data model stabilize and the model becomes ready for external consumption and distribution. At this point, the user can elect to generate a "Published" version. The purpose is to create an official immutable version (snapshot) of the master data model. Since this snapshot is immutable, it serves as a certification of the data within. The published version is then able to be freely pushed to client-side systems to reference the sanctioned data. Client-side mMDM applications have the option of using a direct database connection to the published version, or receiving a copy in XML form via the EIF pub/sub feature.
The mMDM Publish feature was not intended as a backup/restore mechanism.
To be able to go back to an older, published version of a Workspace, I recommend creating a backup copy of the mMDM database using the database backup tools from the RDBMS vendor, every time you publish a new version of the Workspace.
Keywords: None
References: None |
Problem Statement: After adding items to a hierarchy in Aspen Manufacturing Master Data Manager the following Validation Errors window appears:
Duplicate Identifiers found | Solution: This window appears because the hierarchy's properties have been set to not allow more than one instance of the same item to be added.
This setting is initially set when creating a new hierarchy. In the screen capture below, be sure to select the Allow multiple instances option when creating a hierarchy.
If the hierarchy has already been created, you can change the setting in the lower left-hand corner of the mMDM Definition Editor for a specific hierarchy. Change the IsSingleInstance field to FALSE to allow more than one kind of item in a hierarchy.
Keywords: duplicate, hierarchy, S95, IsSingleInstance, ODM, mMDM, allow multiple instances
References: None |
Problem Statement: When trying to download the AtOdmEditor from the Aspen Operations Domain Model (ODM) Web Manager page a download error is generated. The error says the AtOdmEditor.exe.config file cannot be found. This is an ODM Smart Client download issue on Windows Server 2008 with IIS 7.0.
This Knowledge Base article shows how to properly configure Internet Information Services (IIS) 7.0 to allow the AtOdmEditor smart client to download. | Solution: The File not found error occurs because, under Windows Server 2008, IIS 7.0 restricts the transfer of certain file types from the Web server. The two files reported as not found are the two files recently added to ODM: AtOdmEditor.exe.config and AtOdmEditorOffline.exe.config.
To allow the IIS 7.0 Web server to serve these files, go to the following directory and modify the configuration file that IIS 7.0 uses globally:
C:\Windows\System32\inetsrv\config\ApplicationHost.config
In that file, find the <requestFiltering> section and change <add fileExtension=".config" allowed="false" /> to <add fileExtension=".config" allowed="true" />.
For more details, please refer to the MSDN article:
http://msdn.microsoft.com/en-us/library/ms689460.aspx
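For illustration, the edit can also be scripted. The Python sketch below flips the allowed attribute in a simplified stand-in for the <requestFiltering> section; the real ApplicationHost.config is much larger, and you should always back it up before editing:

```python
import xml.etree.ElementTree as ET

# A simplified stand-in for the <requestFiltering> portion of
# ApplicationHost.config (illustration only; not the full file).
SAMPLE = """
<configuration>
  <security>
    <requestFiltering>
      <fileExtensions>
        <add fileExtension=".config" allowed="false" />
        <add fileExtension=".asax" allowed="false" />
      </fileExtensions>
    </requestFiltering>
  </security>
</configuration>
"""

def allow_extension(xml_text: str, ext: str) -> str:
    """Set allowed="true" on the <add> element for one file extension."""
    root = ET.fromstring(xml_text)
    for add in root.iter("add"):
        if add.get("fileExtension") == ext:
            add.set("allowed", "true")
    return ET.tostring(root, encoding="unicode")

patched = allow_extension(SAMPLE, ".config")
print(patched)
```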
Keywords: webserver
References: None |
Problem Statement: Knowledge Base article 125366 shows how to view Aspen InfoPlus.21 (IP.21) data in Aspen Web.21 through the Aspen Operations Domain Model (ODM).
This KB article provides a description of an advanced, yet easy to implement, feature of the integration between the ODM/IP.21 and Web.21. | Solution: Web.21 graphics support the building of a template for simplifying the construction of multiple complex graphics that contain repeated entities like reactors, pumps, motors etc. The templates work by specifying one or more replaceable parameters that you can use in scripts and/or tag name. For example, let's consider a batch reactor. Here is what the template looks like:
All the data that will be depicted on this template will come from an item definition in the ODM that has a particular class associated with it. For example, here is the definition in Web.21 of one of the points above:
The %base% is the default replaceable parameter that is defined when you place an instance of this template on a graphic. You can see from the definition that this parameter is the name of the item in the ODM. The part to the right of the @ character is the class and class attribute that is being displayed.
This component is described in the ODM by the following class:
All the attributes with the small yellow square with the red dot as part of their icons are dynamic attributes. Their values will come from fields in IP.21. When you are defining a class that can apply to many items, though, it would be very convenient to be able to infer the actual tag names for the various attributes via some naming convention that you can build into the class definition, so that when you create an item you don't have to specify the tags for all the attributes, as this would be a very tedious and mistake-prone process. So, at the class level you can specify the 'Default' values for these attributes (the tags from which the actual data will come), and these can use naming conventions or reference other data within the ODM. For example, the DefaultValue for the 'ProcessStep' attribute is:
{0,IP21,0,string,<tag><Name tok="\"><![CDATA[BMS_UNIT_STATUS-\Name\ 1 IP_TREND_VALUE]]></Name><Attribute></Attribute><Source></Source><Map></Map><datatype>string</datatype></tag>}
Everything within the {} characters is basically the tag definition.
The first parameter, 0, is an internal placeholder for the ODM.
The second parameter, IP21, is the name of the Dynamic Data Source in the ODM. The ODM supports process data sources as well as relational databases, BPC, and BPD; the latter three are beyond the scope of this article.
The third parameter, 0, is used for SQL Query data sources and is the ID of the SQL query.
The fourth parameter, string, is the type of data you are asking the ODM to return for this tag definition, valid values are also integer and double.
The fifth parameter is an xml definition of the tag. It needs an enclosing <tag> element.
The Name sub-element of <tag> is the tag name. This is often specified in an XML CDATA section, as in the above example. The Name element also supports a 'tok' attribute: the character used in the tag name to mark a section for special processing between tokens. The special processing is reflected access to the ODM. In the above example, the actual name of the item to which this class is applied will be substituted for the \Name\ section of the tag name. You can use the advanced viewer to determine other available properties.
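To make the five-parameter layout concrete, here is a hypothetical Python parser for the tag-definition string. The field names are illustrative only and are not part of any ODM API:

```python
# Hypothetical illustration: split an ODM dynamic-attribute tag definition
# of the form {p1,p2,p3,p4,<tag>...</tag>} into its five documented parts.
def parse_tag_definition(defn: str) -> dict:
    body = defn.strip()
    if body.startswith("{") and body.endswith("}"):
        body = body[1:-1]
    # The first four parameters are comma-separated; everything after the
    # fourth comma is the XML <tag> element (which may itself contain commas).
    placeholder, source, query_id, datatype, xml = body.split(",", 4)
    return {
        "placeholder": placeholder,  # internal placeholder for the ODM
        "data_source": source,       # name of the Dynamic Data Source (e.g. IP21)
        "sql_query_id": query_id,    # used only for SQL Query data sources
        "datatype": datatype,        # string, integer, or double
        "tag_xml": xml,              # the <tag>...</tag> definition
    }

example = ('{0,IP21,0,string,<tag><Name><![CDATA[BMS_UNIT_STATUS-TK101 1 '
           'IP_TREND_VALUE]]></Name><datatype>string</datatype></tag>}')
parsed = parse_tag_definition(example)
print(parsed["data_source"], parsed["datatype"])
```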
There are some special things you can do in these sections as well. One of them is accessing other class attributes. The simplest way to access another class attribute is to use the key sequence @../ (@ is a shortcut for the ClassValueFind method). For example,
{0,IP21,0,double,<tag><Name tok="\"><![CDATA[\@../TemperatureTag\ IP_VALUE]]></Name><Attribute></Attribute><Source></Source><Map></Map><datatype>double</datatype></tag>}
In the above DefaultValue \@../TemperatureTag\ will be replaced with the contents of the TemperatureTag attribute in the same class definition as the attribute being defined. ../.. will look at attributes of the parent class etc.
One other special keyword is the ResolveIdentifier which will basically let you look at another ODM Item by appending it to an attribute that contains a pointer to another ODM identifier. You can, for example, get the name parameter by saying ResolveIdentifier.Name.
Dynamic class attributes, like any class attribute, can also be passed to calculations in the ODM. The ODM will ensure that the dynamic data is obtained before the calculation is called. Since the results of the calculations can be bound to a class attribute, these can be used directly in Web.21 as well.
NOTE: Historical data reads using ODM Dynamic Data are not supported in the current (2006.5 and V7.1) releases of aspenONE. There's an enhancement request (CQ00335067) to make it work in a future release.
Keywords: None
References: None |