Problem Statement: I installed a new patch for V12.1 PIMS and then had to remove and reinstall the software. Now I see an error when trying to reinstall the patch, as seen in the Patch Log:
2023-06-01 09:44:41.503 +03:00 [Information] TRACE: Startup..
2023-06-01 09:44:41.506 +03:00 [Information] TRACE: Starting EP xml validation.
2023-06-01 09:44:41.507 +03:00 [Information] Validating Product: 'PRFPIMS0014', DisplayName: 'Aspen PIMS'.
2023-06-01 09:44:41.510 +03:00 [Information] TRACE: EP xml successfully validated.
2023-06-01 09:44:41.511 +03:00 [Information] TRACE: Fetched all root directories successfully.
2023-06-01 09:44:41.554 +03:00 [Information] TRACE: Checking if this EP is already installed.
2023-06-01 09:44:41.556 +03:00 [Error] This EP 'Aspen PIMS_V12.1.2.2' cannot be un-installed, as following products are either not installed or has different EP version installed.
Aspen PIMS | Solution: When you uninstall the PIMS software, the backup files that were downloaded onto the machine are not removed during the uninstall process. You will need to manually remove these files and rerun the patch installer. The files are located under C:\Program Files (x86)\AspenTech\EP_Backup; delete the “Aspen PIMS_V12.1.2.x” folders found there.
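For example, from an elevated Command Prompt (a sketch; substitute the actual version folder name(s) present on your machine):
rmdir /s /q "C:\Program Files (x86)\AspenTech\EP_Backup\Aspen PIMS_V12.1.2.2"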
Keywords: None
References: None |
Problem Statement: After installing Aspen Unified PIMS, the error message shown in the screenshot below is encountered when browsing to it, even though all necessary IIS components had been installed.
The line highlighted in red under Config Source may show <anonymousAuthentication enabled="false" /> depending on which attribute in IIS is being locked or unlocked.
When browsing to AspenUnified in Internet Information Services (IIS) Manager and double-clicking Authentication under IIS, the following error is encountered in IIS Manager. | Solution: This is due to either or both of the anonymousAuthentication and windowsAuthentication attributes being locked at the web server level. Below are the steps to check and unlock the attributes.
Launch Internet Information Services (IIS) Manager.
Highlight the web server name under Connections in the left-hand pane. In the screenshot above, the web server name is WIN11VM.
Double-click Configuration Editor under the Management section.
Select system.webServer/security/authentication/anonymousAuthentication from the drop-down list beside Section: in the middle pane.
Click Unlock Section under Section in the right-hand pane.
Click Apply under Actions in the right-hand pane.
Select system.webServer/security/authentication/windowsAuthentication from the drop-down list beside Section: in the middle pane.
Click Unlock Section under Section in the right-hand pane.
Click Apply under Actions in the right-hand pane.
Restart the Aspen Unified Agent Supervisor Service in Windows Services.
Refer to attachment How to unlock Windows and Anonymous Authentication for IIS to resolve HTTP Error 500.19.pdf for instructions with screenshots.
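Alternatively, the same sections can be unlocked from an elevated Command Prompt using appcmd, which is equivalent to the Configuration Editor steps above:
%windir%\system32\inetsrv\appcmd unlock config /section:system.webServer/security/authentication/anonymousAuthentication
%windir%\system32\inetsrv\appcmd unlock config /section:system.webServer/security/authentication/windowsAuthentication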
Keywords: The requested page cannot be accessed because the related configuration data for the page is invalid.
<windowsAuthentication enabled="true" />
<anonymousAuthentication enabled="true" />
There was an error while performing this operation.
Filename: \\?\C:\Program Files\Aspentech\Aspen Unified\Web\web.config
Error: Lock violation
References: None |
Problem Statement: How to configure a boiler unit model in Aspen HYSYS? | Solution: Attached is an example case for a boiler. In this case a Gibbs reactor has been used for the combustion chamber and a heater for the feed water side.
The Gibbs reactor can simulate the combustion reaction between air and fuel quite easily. It does not require adding reactions to the simulation case; it simply calculates the flue gas composition and reaction heat by minimizing the Gibbs free energy. Refer to knowledge base solutions 109815 and 118546 for details on the Gibbs reactor.
A cooler unit model (not a heater) has been used to connect to the reactor because the duty stream from the Gibbs reactor carries a negative value.
A Spreadsheet has been used to calculate the right amount of fuel. The fuel is adjusted to give 5% excess O2 (mass basis) in the flue gas.
Keywords: Boiler, Gibbs Reactor
References: None |
Problem Statement: Heat Capacity results for REFPROP change depending on whether NIST or PURExx is selected as the first databank. REFPROP does not have any adjustable parameters, why do the results change? | Solution: Because the basis of ideal gas is different between Aspen Plus and REFPROP, the Aspen Plus ideal gas properties (HIG, SIG, CPIG) are used to adjust the enthalpy and entropy to match the Aspen Plus ideal gas basis. Therefore, the results of H, S, and CP calculated by REFPROP in Aspen depend on Aspen Plus ideal gas model parameters. When the ideal gas model and parameters come from different sources (PURExx and NIST), the results for the thermodynamic properties may differ.
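Schematically, the adjustment described above can be written as follows (a sketch of the basis shift, not the exact internal implementation; analogous relations apply to S and CP):
H(reported) = H(REFPROP) + [HIG(Aspen Plus) - HIG(REFPROP basis)]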
Keywords: None
References: VSTS 817614 |
Problem Statement: This is a quick guide to adding PID loops in the Aspen Watch database and starting scanning on PCWS. | Solution: This procedure lists the steps to add PID loops to Aspen Watch. In this example we will use Aspen Watch Maker as the main interface.
1.- The first step is to create a tab-delimited text file that contains the information for the tags that will be required. A good example of this template can be found in C:\ProgramData\AspenTech\APC\Performance Monitor\Tools; the file is called Example_MiscPidTags.txt.
The file requires some information about the tags that you want to add. In this example two temperature tags have been added. The important thing to remember is that under the TagType column you must specify 1, which indicates that the tag is a PID tag; a value of 0 indicates a Misc tag. (The # character indicates a comment that will not be parsed into Aspen Watch; it is suggested not to modify anything in the file except the tags.) A minimal illustration is shown below.
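A sketch of the layout (illustrative only - the exact column set here is an assumption, so start from the Example_MiscPidTags.txt template rather than building the file from scratch):
# TagName	TagType
TI100.PV	1
TI101.PV	1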
2.- Then go to Aspen Watch Maker and click Import. You will have to specify the text file created earlier, and the interface will ask for a few details used for the collection, such as the I/O device, the scan period, and the DCS template.
3.- After the PID loops are created, configure the tags by clicking the Configure button. The status of the tags should change from Initial to CONFIG and, if no errors are encountered, from CONFIG to ON. If everything is fine the tags will display NORMAL under the Scanning button.
Scanning PID Tags on PCWS
4.- Once the tags are ON, they will appear automatically on PCWS under History and then PID Loops. Go to Group Configuration; in this section you can configure a group that contains all the tags to be scanned.
PID Group = is the name of the new PID Group
Description = Description of the group
Start Time = Date and hour when the Scanning should Start
End Time = Date and hour when the Scanning should Stop
PID Schedule Recurrence = Period at which the scanning will repeat, starting from the Start Time
Then, in the tags list, add the tags that will be part of the PID group, moving them from Available to Selected (it is recommended not to scan more than 100 tags at the same time). In this case the final configuration of the group is as shown below:
To save all changes, click the Apply button; the new group should be created with an Idle (inactive) status.
When the schedule starts, the group will show a Scanning status.
Once it finishes, it will go back to Idle and you can see the results on the Analysis tab.
If the scanning is OK, it will show the proper results as long as the configuration in Aspen Watch and the communication to the DCS for those tags are OK.
Keywords: AspenWatch, PCWS, PID Loops
References: None |
Problem Statement: How to build a tag template in DMC3 Builder? | Solution: Download the attached pdf document which contains a tutorial on how to build a tag template in DMC3 Builder.
Keywords: DMC3
Constrained Identification
Mass balance
References: None |
Problem Statement: How to perform constrained model identification for mass balance type constraints in DMC3 Builder? | Solution: Download the attached pdf document which contains a tutorial on how to perform constrained model identification for mass balance type constraints.
Also attached is a dataset .clc file which was used for the example provided in the tutorial.
Keywords: DMC3
Constrained Identification
Mass balance
References: None |
Problem Statement: This Knowledge Base article provides possible steps to resolve the issue of not being able to start data collection of an ACO controller from Watch Maker | Solution: A problem can arise where the START button in Watch Maker does not get the ACO controller to start collecting. Even if the “Update” action is performed, the “Play” icon is not shown next to the controller.
There is a workaround to start it manually: set the current timestamp in the SCHEDULE_TIME field of the QueryDef record named after the controller tag, using the InfoPlus.21 Administrator.
The “controller Tag name” record is a scheduled QueryDef that activates the data collection for ACO controllers (RTE controllers work differently).
If the SCHEDULE_TIME field value is '??????????' or an old timestamp, setting it to the current timestamp will allow the record to begin scheduling itself. Follow the InfoPlus.21 timestamp format, for example 01-JUN-23 10:00:00.0.
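Equivalently, the timestamp can be set from Aspen SQLplus (a sketch, assuming the record is based on QueryDef; substitute the actual controller record name):
update querydef set schedule_time = current_timestamp where name = 'MYCONTROLLER';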
Keywords: IP.21 Administrator, QueryDef , Schedule_time, ACO controller, Watch Maker, Data Collection
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve the issue of a fixed reference time when creating a PID group configuration in PCWS, as well as the correct procedure for setting the TTSS. | Solution: What is PID Group Configuration?
PID groups enable automated data collection and PID performance analyses to occur in a scheduled, coordinated manner, so that available system resources are used optimally.
How to create a new PID group:
In the top section of the PID Group Configuration page, enter a unique PID Group name and Description.
Select the desired Start Time, End Time, Recurrence Interval, and Period.
Specify the list of PID tags for the group by following these steps:
Select an appropriate Time to Steady-State (TTSS), in minutes, that approximates an estimated settling time for the control loop during nominal operation.
From the Available list, select a PID loop tag, and then click the right arrow button. As a result, the tag is added to the Selected list.
Other options for managing the Selected list of tags include the following:
To remove a tag from the Selected list, select the tag, and then click the left arrow button.
To remove all tags from the Selected list, click the Clear button.
Add no more than a maximum of 100 PID tags to the Selected list.
Note: A maximum of 100 tags per group of PID loops is enforced, so that data requests for the Aspen Watch database and Cim-IO gateway are managed at a level that affords fast scanning for PID loop information.
After completing the selection of PID tags for the group, click Apply. As a result, the newly created group is added to the PID Groups table, located in the bottom section of the page.
Note: You need to first select the loop in the Available list, then select the TTSS from the dropdown list, then click the >> (Add) button.
To modify the TTSS value for an existing configured PID Group:
Select the PID Loop in the Selected pane, then click the << button to remove the PID Loop from the configured PID Group and restore it to the Available pane. Then re-add the PID Loop with the appropriate (new) TTSS value to the Selected pane and Apply the changes.
To modify the TTSS value manually from InfoPlus.21:
The PID Group configuration information is saved in the AW_PIDGRPDef record. Alternatively, the TTSS value can be manually adjusted in the AW_PIDGRPDef record AW_PID_TTSS parameter using the Aspen InfoPlus.21 Administrator.
Also note that the TTSS used in PID Watch is somewhat of a misnomer and is not explicitly used in calculations as a time to steady-state value like you would in DMCplus control - but is rather used to limit the analysis plots and model plot lengths, and simulation plot time horizon. Having stated that however, the TTSS should be specified for a nominal settling time for various DCS control PID Loop types.
Moreover, the configured TTSS also comes into play in the determination of the Performance Index at the Reference Time value, where this Performance Index result is initially configured at (default value) 25% of the specified TTSS.
The Reference Time should be subsequently adjusted as appropriate, in consideration of the PID Loop type and process dynamics. The calculated Performance Index value at the user-specified Reference Time is indicated on the PID Performance Index plot to provide a loop performance grade at the desired closed-loop rise time. Specifying the appropriate Reference Time allows for consistent evaluation of Performance Index results for all PID Loop types;
for example: the control performance for a Flow loop configured with Reference Time = 1 minute can be compared with a (slower settling time) Pressure or Level loop configured with Reference Time = 10 minutes. The specific PID Loop Reference Time provides an expectation for measuring ideal control performance that can be used in benchmarking and identifying control degradation.
The PID Loop Reference Time can be adjusted independently of the configured TTSS parameter on the PID Loop Analysis web display. The Reference Time value is saved in the PID Loop AW_PIDDef record AW_PCTCPIP parameter. Note that re-configuring a PID Loop in an existing PID Group using the PID Group Configuration web display should not overwrite the specified Reference Time (when not NULL) with the default 25% value for the selected TTSS.
Keywords: TTSS, Reference Time
References: None |
Problem Statement: This Knowledge Base article illustrates the Anti-Windup Status indicator and its meaning. | Solution: AWS stands for Anti-Windup Status in reference to a valve position. This is configured in the CCF with the entry AWSCOD.
Anti-Windup Indicator (AWSCOD) - Manipulated variable anti-windup code:
0 (NONE) Can move output in either direction
1 (LOW) Can only move output in a positive direction
2 (HIGH) Can only move output in a negative direction
3 (BOTH) Cannot move output
You may also see AWS in the combined status for MVs in the following states:
AWS Low - Operating to a target limited by a minimum valve position (wound up low)
AWS Hi - Operating to a target limited by a maximum valve position (wound up high)
AWS Blocked - Operating to a target limited by restricted valve movement (blocked)
PCWS messages you may see as follows:
11026 --- AWS limit adjustment
11020 --- AWS = 3 -- No movement
11019 --- Invalid AWS code
You can find the associated messages with the message ID in the configuration files like message.dat and dmcplus.message.config found on the Online server:
C:\ProgramData\AspenTech\APC\Online\cfg
Keywords: AWS, Anti-windup, AWSCOD, PCWS, Combined status
References: None |
Problem Statement: After compiling the Recipe Procedure Logic (RPL), sometimes we will see the error Components not found or not compiled: MIXING_SAMPLING: GML_YIELD_AND_RECONCILIATION.SAMPLING. What is the meaning of it and how to fix it? | Solution: On compiling the RPL on PFC Editor, sometimes we will see the error Components not found or not compiled: MIXING_SAMPLING: GML_YIELD_AND_RECONCILIATION.SAMPLING.
This error means that some components used in this RPL do not have the required Basic Phase Libraries (BPLs). In this case, the missing BPL is GML_YIELD_AND_RECONCILIATION.
To fix this issue, we need to click Back to Code to close the PFC Editor. Then click the Basic Phase Libraries tab to add the required Basic Phase Libraries.
Click Rebuild List after adding the required Basic Phase Libraries to commit the change.
Go back to RPL Data tab, and open the PFC Editor by clicking the Load Designer button.
Click Build and then Compile; the error message should now be gone and you will see "Design has been correctly coded."
Keywords: Aspen Production Execution Manager (APEM)
Recipe Procedure Logic (RPL)
Components not found or not compiled
References: None |
Problem Statement: On Aspen Production Execution Manager (APEM), how to import an existing Recipe Procedure Logic (RPL) export file to a new RPL? | Solution: On the RPL Management page, after you have created a new RPL and added the required Basic Phase Libraries, click Load Designer to open the PFC Editor.
The PFC Editor will be opened. Then click File and then Import.
Select an existing RPL export file, and click Open design ...
The existing Recipe Procedure Logic (RPL) export file has been imported, and you could modify it on the PFC Editor.
Keywords: Aspen Production Execution Manager (APEM)
Recipe Procedure Logic (RPL)
References: None |
Problem Statement: On Aspen Production Execution Manager (APEM), how to add Basic Phase Libraries (BPLs) to a Recipe Procedure Logic (RPL)? | Solution: After we created a Recipe Procedure Logic on the RPL management page, we need to click the Basic Phase Libraries tab.
Click the + button on the top.
Then pick the required Basic Phase Library, and click OK.
The selected Basic Phase Library will be displayed on the screen. Then click Rebuild List to add its dependent Basic Phase Libraries.
As a result, the selected Basic Phase Library and its dependent Basic Phase Libraries will be displayed on the screen.
Keywords: Aspen Production Execution Manager (APEM)
Basic Phase Libraries (BPLs)
Recipe Procedure Logic (RPL)
References: None |
Problem Statement: This article describes an issue related to the numerical precision of the I/O data types in DMC3 Builder, and shows a workaround for this type of problem. | Solution: The data type conventions used by DMC3 are the same as those used in the InfoPlus.21 Administrator. Some details about the data conventions for IP.21 can be found in the following KB article:
https://esupport.aspentech.com/S_Article?id=000062381
That article notes the problem that when single-precision values are converted to double precision, they can pick up fuzz, creating the need to round the values to avoid input errors.
In general terms this is a known issue and expected software behavior. Nevertheless, we can use a custom calculation to deal with it.
Take the following example scenario.
In this case we set two tags to read a value of 1.3 from IP.21; one of the tags is SINGLE and the other is DOUBLE. When we hit Test Connection, we get the following values:
The single value shows 1.3 as expected, but the double shows 1.2999999…; in this case we are using the controller limits for comparison purposes. As long as HEngLim <= HValLim, the controller is fine. However, for the same example, if we change the HOpLim from 1.2 to 1.3 we can get this kind of error:
The display seems to compare 1.3 vs 1.3 for both limits and states that one is bigger than the other (which looks like an odd comparison). In fact, it is comparing HOPLIM = 1.3 vs HENGLIM = 1.2999999…, which triggers the error because the operator limit cannot be higher than the engineering limit. We can check this by taking a snapshot of the controller and then going to Calculations > User Entries and looking for the parameter.
To avoid this kind of problem we can set up a custom calculation that rounds the values for the entry. Input calculations take place before the APC engine starts.
In this case we can use the ROUND() function to overcome the problem; a simple example of how to deal with the problem is shown below.
The ROUND() function requires just two parameters: the value to be rounded and the required precision.
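For instance, an input calculation along these lines (a sketch only; HENGLIM is used as an illustrative entry name and a precision of 3 is assumed - the exact syntax and entry names depend on your application):
HENGLIM = ROUND(HENGLIM, 3)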
After the controller is deployed with the calculation applied, we won't see the error again:
A more general calculation solution would be the following:
Create a user entry for numerical precision. Set that value to the numerical precision of the PCWS before deploying the application. You don't want to deploy with the application running and have engineering or validity limits reset by the calculation by using too small of a precision.
The precision can be set in the simulation under the General Application Details. By default, the PCWS usually shows 3 digits after the decimal point, so you could set it to 3. Then, if needed, it can be changed online.
Then create two input calculations, one for independents and one for dependents using the wildcards to round the appropriate limits to the required precision.
Keywords: DMC3Builder, IO Datatypes, PCWS
References: None |
Problem Statement: Starting in V12.1, the Production Control Web Server (PCWS) hides the operator limit when the rank is not important (rank 9999). This value is still available in the engineer view. | Solution: Limits with Rank 9999 are not used by the controller. If the user still wants to show these values on the web page, two files need to be edited under the C:\ProgramData\AspenTech\APC\Web Server\ACOView\view folder. Open Notepad as administrator and edit the following files:
RTEoperations.xsl on line 356 should show:
<xsl:when test="(@class='hidden' or @class='hidden-edit') and /data[@view='zzzoperations']">
Operations.xsl on line 966 should show:
<xsl:when test="(@class='hidden' or @class='hidden-edit') and /data[@view='zzzoperations']">
There is no need to restart the APC Web provider data service to see the change. Once the 2 files are changed the operator view will show the limits.
Follow Up Request: How to change the color of these ranks from light grey to being more visible blue?
Here's how you can hack the colors on the web page, but please note that applying a patch or upgrade later on might overwrite these files, so you might have to do it again:
For Light Background on the PCWS (setting under Preferences tab):
1. Open this file to edit: C:\inetpub\wwwroot\AspenTech\ACOView\css\AspenCUILight.css
2. Scroll down to line 131 and 133 and change the color here in the code to blue.
Original code line 131:
.hidden {color:#BDBDBD}
Original code line 133:
.hidden-edit {cursor:pointer; color:#ccccff;}
Updated code line 131:
.hidden {color:blue}
Updated code line 133:
.hidden-edit {cursor:pointer; color:blue;}
3. Then, when opening the PCWS web page, press CTRL+F5. After the page refreshes, you should see the change in effect.
For Dark Background on the PCWS:
1. Open this file to edit: C:\inetpub\wwwroot\AspenTech\ACOView\css\AspenCUIDark.css
2. Similar to above, you want to change the same code and this time use the color aqua if you want it exactly the same color as the other limits that are not rank 9999:
Line 246: .hidden {color:aqua}
Line 248: .hidden-edit {cursor:pointer; color:aqua}
You can also set it to blue instead if you want it visible but distinguishably different than the other limits. It would look like this:
3. Remember to refresh the web page using CTRL+F5.
Keywords: PCWS, Rank, 9999, hidden, limits, colors, change, visible
References: None |
Problem Statement: When the Aspen Mtell server restarts, the MAM services expect RabbitMQ to be up and running beforehand, which may not necessarily happen in order. To fix this, we can create a scheduled task to restart the MAM services. | Solution: To verify that the MAM services are stopped, open Task Manager and look for the APMDataCollector and APMDataTransformer services. We can confirm this is the issue if either of these two services is stopped.
Optionally when you open the dataprovider path https://{server_path}/dataprovider you can see this.
To fix this we can add a scheduled task.
1. On the Start Menu type Task Scheduler
2. Right click Task Scheduler(local)
3. Create Task > Fill in the fields as shown below
4. Move to triggers and click new, then apply the same setting as shown below
5. Move to Actions and click New.
6. Enter NET in Program/script and start “APMDataCollector” in Add arguments, then click OK
7. Enter NET in Program/script and start “APMDataTransformer” in Add arguments, then click OK
8. Enter IISRESET in Program/script, then click OK
9. Then click OK again
10. Now you will see the task in the Task Scheduler
To test you can right click and run the task manually, after doing that the services should be running correctly. And after any system restarts in the future, the MAM services should start automatically.
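The three actions above are equivalent to running the following commands from an elevated Command Prompt (service names taken from this article):
net start "APMDataCollector"
net start "APMDataTransformer"
iisreset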
Keywords: Mtell Alert Manager (MAM)
Services stopped
task scheduler
References: None |
Problem Statement: When attempting to start multiple configurations in Process Pulse, the process fails and outputs the following errors.
The Process Pulse logs can be obtained by following this KB article: https://esupport.aspentech.com/S_Article?id=000099565.
This article describes the resolution. | Solution: If you observe the errors shown in the following logs:
Example Enterprise Process Pulse Data.####.log:
Error,2022-12-12 09:51:33.7696,78,CAMO.EPP.Interfaces.Connection.ConnectionObject`1.Connect, System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at net.pipe://localhost/UnscramblerProcessPulseApplicationService/Engine/Configuration_10_10/service that could accept the message. This is often caused by an incorrect address or SOAP action.
Example Enterprise Process Pulse_Error.log:
Error,2022-12-04 19:55:07.5081,20,CAMO.EPP.Interfaces.Connection.ConnectionObject`1.Connect, System.ServiceModel.EndpointNotFoundException: Could not connect to net.tcp://localhost:8010/. The connection attempt lasted for a time span of 00:00:02.0549596. TCP error code 10061: No connection could be made because the target machine actively refused it 127.0.0.1:8010. ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:8010
Check that Net.Tcp Port Sharing Service is running and starts automatically:
Open Services (Start > Search for Services)
Open the properties for Net.Tcp Port Sharing Service and select Automatic from the startup list
Check if Net.Pipe Listener Adapter service is running. This is used by WAS (Windows Process Activation Service)
Right-click > Start, if the process is not running
Restart IIS
If Net.Pipe Listener Adapter service is not shown in the Services window, this is an indication that the necessary prerequisites have not been installed. Open Server Manager and make sure that the features under the .NET Framework (shown below) are installed, particularly WCF services.
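The service startup changes can also be made from an elevated Command Prompt. This sketch assumes the standard Windows short names for these services (NetTcpPortSharing and NetPipeActivator), which is worth verifying on your system:
sc config NetTcpPortSharing start= auto
net start NetTcpPortSharing
net start NetPipeActivator
iisreset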
Keywords: Process Pulse
EndpointNotFoundException
pipe error
References: None |
Problem Statement: When any agent, even a single agent, has cumulative alarm duration (CAD) enabled, this can cause a huge spike in initialization time (observed through runtime statistics), which can cause missed processing cycles if the time intervals are small enough.
For one customer, their agents were on a 2 minute cycle, but the CAD query alone was taking 45-90 seconds to initialize. | Solution: After applying the fix described below, the query time shrank to less than 1 second.
This issue has been observed by early adopters of Mtell. The early versions of Mtell have different database settings compared to modern versions, and in particular, the CBM.ProfileExecutionHistory table can have particularly significant discrepancies, which is the table CAD agents continuously query. This defect likely affects only a small portion of historical customers.
Solution
Deleting duplicate rows, rebuilding the indexes and keys for the CBM.ProfileExecutionHistory table, and re-sorting can significantly increase the performance of the stored procedure. Follow the steps shown below to apply the solution. The scripts are attached to this article, and a database backup is always recommended prior to running these scripts.
Open SSMS and navigate to the Mtell Databases folder
Stop agent services
Run “Script to test stored procedure query performance.sql” to observe speeds
Run “Script to delete repeated rows.sql”
Delete Key: Right-click CBM.ProfileExecutionHistory -> Keys -> FK_ProfileExecutionHistory_Profile
Delete Index: Right-click CBM.ProfileExecutionHistory -> Indexes -> PK_ProfileExecutionHistory
Run “Script to alter the ProfileExecutionHistory.sql”
Run “Script to test stored procedure query performance.sql” to check if query time has improved
Keywords: None
References: None |
Problem Statement: Not able to see the Sentinel RMS License Manager folder in the path “C:\Program Files (x86)\Common Files\SafeNet Sentinel” after the SLM installation is completed? | Solution: In some SLM installations, even though the installation completes successfully, the user might not see the Sentinel RMS License Manager folder after completing the installation.
If the SafeNet Sentinel folder is not available, proceed to install it manually by following the steps below:
Go to the aspenONE media and locate the following folder:
aspenONEMedia\aspenonesuite\Sentinel RMS License Manager Installer
Run the setup.exe and complete the installation.
Once the installation is completed successfully, the Sentinel RMS License Manager folder will be available in the SafeNet Sentinel folder.
Keywords: SLM
SafeNet Sentinel
Sentinel RMS License Manager
References: None |
Problem Statement: What are the best practices to efficiently query a large amount of data from Aspen InfoPlus.21 (IP.21)? How often can we duplicate the TSKs (sqlplus_server.exe) to improve the performance - are there restrictions or possible locks on the database? Does it make sense to split the TSKs on different virtual CPUs using the affinity in the IP.21 Manager? | Solution: The reason we recommend running IP.21 processes on the same CPU is that the Windows locking mechanism works most efficiently when processes performing locks are on the same CPU. When you access information from the IP.21 database (whether reading or writing), you lock the database for the duration of the transaction. Please look at KB 000049847 - Utility that measures the time needed to perform locks against the Aspen IP.21 database
If an application is compute-intensive and accesses IP.21 history frequently, then it makes sense to spread processing to all CPUs. For example, TSK_APEX_SERVER opens a thread to each APEX client. If you lock TSK_APEX_SERVER to a single CPU, you lock all the threads to the same processor, and, if you set the affinity settings for TSK_APEX_SERVER to all CPUs, then the threads likewise are able to use all CPUs. When you plot data, APEX divides the plot into 199 plot buckets. (See KB 000068604, How does the Aspen Process Explorer Best Fit Algorithm work?).
For each plot bucket, TSK_APEX_SERVER must find the first, last, maximum, and minimum values. The longer the time span of the plot, the more data TSK_APEX_SERVER must search to form the plot buckets. TSK_APEX_SERVER does not access the IP.21 database much, but it does access history file sets, and TSK_APEX_SERVER is compute-intensive.
The recommendation is to set TSK_DEFAULT_SERVER, TSK_ORIG_SERVER, TSK_APEX_SERVER, TSK_EXCEL_SERVER, TSK_ADMIN_SERVER, and TSK_BATCH21_SERVER to use all CPUs.
IP.21 has a helpful utility to monitor database locks - C:\Program Files\AspenTech\IP.21\db21\code\InfoPlus21LockMetrics.exe.
Open the program, click on Configuration in the upper left corner, and select Start Collecting. Then sort by Avg Locks/S. This will show the processes that are most frequently accessing the database. These processes should be placed on the same CPU. Other interesting columns are Last # Locks and Total # Locks.
The other tool is Windows Task Manager. Sort by CPU time to find the most compute-intensive processes and see if there is a correlation between what you see in InfoPlus21LockMetrics. If an IQ task is consuming a lot of CPU time but has low lock metrics totals, it makes sense to set the affinity settings for the IQ task to all CPUs.
If an IQ task is consuming system resources, then use the Query->Monitor facility in the SQLplus Query Writer to determine the query being activated. A poorly written query can kill system performance no matter how powerful the CPU is. We suggest reviewing KB 000050328 - How to write efficient Aspen SQLplus queries
ODBC connections to IP.21 are handled by TSK_SQL_SERVER. You can spread the ODBC processing load by creating multiple copies of TSK_SQL_SERVER and having the ODBC link on the client computers connect to different copies of TSK_SQL_SERVER by altering the port number used to connect to IP.21.
We hope these recommendations are useful and in case you need further assistance, please get in touch with AspenTech Support Team.
Keywords: Slow
Configuration
improvement
Faster
References: None |
Problem Statement: How do I configure my system so I can view PI data in Aspen Process Explorer?
What are the steps to install and configure ADSA components for getting process data from PI? | Solution: An ADSA Service Component called Aspen Process Data for OSI PI is available to view PI data in Process Explorer plots. This solution explains how to install it.
Before installing Aspen Process Data (PI), you must have the PI-API installed. Most PI client applications install this for you.
The PI-API DLLs must reside where the PATH environment variable points in order for the DLLs to register. Usually all PI applications are installed in a folder called PIPC and the DLLs are located under that path.
Directions for setting up PI
Place the pipc.ini file in the windows directory (C:\Windows)
Unzip pipc.zip into the C:\ directory. This will create the directory C:\PIPC with several subdirectories. PIPC is the PI home directory.
From C:\PIPC\LIBRARY, copy piapi32.dll and pilog32.dll into the %systemroot%\SysWoW64 directory (typically: C:\Windows\SysWOW64)
Modify the file c:\PIPC\DAT\pilogin.ini to point to the correct PI server location OR copy the Two_PILOGIN.ini file to the DAT subdirectory under c:\PIPC and rename it PILOGIN.INI.
In C:\PIPC\bin, run the test app: apisnap.exe
If it asks for a tagname, then you have connectivity to the PI database!
To make sure PI is generating data, use the tagname SINUSOID.
Install Aspen Process Data (PI) (it is installed with the MES Desktop Tools)
Test PI in AspenTech:
Create a User Data Source in ADSA Client Config Tool and add Aspen Process Data (PI) as the Aspen ADSA Service Component.
Add any other necessary services to the data source. You will be asked to enter the PI server host name, the port number (usually 5450 for PI3 servers).
Select the checkbox Use default PI Client Configuration. It will read the information in PILOGIN.INI and use it to connect to the PI server.
Open Aspen Tag Browser, select the PI data source, and search for all tags. This should return some tags.
Drag one of those tags into Aspen Process Explorer. You should see a trend.
There is an 'Advanced' button on the service configuration that invokes a standard PI server configuration dialog where you can test that a connection can be made.
Troubleshooting
If, for some reason, the PI data does not come in, try unregistering and then re-registering AtPDPI.dll, which by default is located in C:\Program Files (x86)\AspenTech\ProcessData.
To do that, open the command prompt (Run as Administrator):
To unregister the dll:
%systemroot%\SysWoW64\regsvr32 -u "C:\Program Files (x86)\AspenTech\ProcessData\AtPDPI.dll"
To register the dll:
%systemroot%\SysWoW64\regsvr32 "C:\Program Files (x86)\AspenTech\ProcessData\AtPDPI.dll"
Keywords: APEx
108233-2
References: None |
Problem Statement: How to resolve the DB Write Error (Gen) for controllers in Aspen Watch? | Solution: In Aspen Watch, the user may sometimes get the DB Write Error (Gen) for controllers, and because of this error the controller data collection cannot be started.
To resolve this error,
Open Aspen Watch and select the controller.
Click the Actions->Update option to update the controller.
After successfully updating the controller, start the controller data collection.
Now the controller will start data collection and the status will change to “Success”.
Keywords: Aspen Watch
DMC Plus Controller
Data Collection
References: None |
Problem Statement: The version 2006.5 Aspen InfoPlus.21 Release notes have a section that reads:
Consolidation of Multiple Tasks into TSK_DBCLOCK
The following tasks have been subsumed into TSK_DBCLOCK:
TSK_C21_WIN_INIT
TSK_H21_INIT
TSK_H21_ARCCK
TSK_H21_MNTTAB
LOADDB
Furthermore, TSK_H21_PRIME now starts after TSK_DBCLOCK.
These changes were implemented as part of the enhancement to eliminate
history synchronization after a graceful shutdown.
This | Solution: provides more detail on the specific steps TSK_DBCLOCK now performs, and the order in which it operates.
TSK_DBCLOCK performs the following steps at startup:
Read the input command line parameters.
TSK_DBCLOCK accepts the following command line arguments:
DOUBLE: this causes dbclock.exe to allocate twice the specified amount of memory in order to minimize database lock time during snapshot saves by performing a memory to memory save of the in memory database followed by a background save to disk.
Number of (2 byte) words of memory to allocate for the database to use, also known as the current maximum database size.
SNAPSHOT: this specifies the path and file name of the snapshot file to load, if not otherwise specified in the snapshotlist.config file.
NOHIS: this disables the entire historian. Real-time data values are NOT written to the permanent history files. (NOT RECOMMENDED TO USE.)
VERBOSE: this argument directs TSK_DBCLOCK to output information regarding the repositories and file sets from the $h21\data\config.dat file during its startup. (This is the equivalent of the -d option in TSK_H21_INIT in versions prior to v2006.5).
The VERBOSE parameter is used to diagnose if TSK_DBCLOCK fails to load repositories and file-sets from the config.dat file.
(NOTE: this VERBOSE switch only became available in Patch #5 - Solution 124278)
SYNC: this argument directs TSK_DBCLOCK to synchronize the history repeat areas for every record in the loaded snapshot with the history file sets/caches even if their timestamps match.
(NOTE: this SYNC switch only became available in Patch #5 - Solution 124278)
Create the database shared-memory.
TSK_DBCLOCK creates the database shared-memory based on the size specified in the command line parameters.
Create the subscription shared-memory.
TSK_DBCLOCK creates the subscription shared-memory based on the configuration file group200\dbclock.config.
Create the history administration shared-memory.
TSK_DBCLOCK creates the history administration shared-memory based on the registry value \\HKLM\Software\AspenTech\InfoPlus.21\version\group200\IP21HISTADMIN_SIZE_IN_KB
Note that this is not the history shared-memory.
Load the snapshot
TSK_DBCLOCK loads the defined snapshot into the database shared-memory.
Create the history shared-memory and load repositories and file-sets information.
TSK_DBCLOCK creates the history shared-memory based on the information defined in the config.dat file. Then it loads repositories and file-sets information defined in the config.dat file into the history shared-memory.
Do archive check for all file-sets.
TSK_DBCLOCK loops through all repositories to do an archive check for every file-set in the same order defined in the config.dat file. TSK_DBCLOCK will terminate on the first failure of an archive check for a file-set. Use the VERBOSE command line argument if you need to see which file-set TSK_DBCLOCK fails on. When TSK_DBCLOCK fails to archive-check a file-set, you can use the h21arcckwizard.exe to fix that file-set as follows:
Open a DOS-prompt.
CD to Program Files\AspenTech\InfoPlus.21\c21\h21\bin
Run h21init.
Leave the DOS-prompt open
From Windows explorer run
..\AspenTech\InfoPlus.21\c21\h21\bin\h21arcckwizard.exe
to fix the file-set(s).
Ctrl-C on the DOS-prompt to terminate the h21init.
Synchronize the history repeat areas for every record in the loaded snapshot with the history file sets/caches.
TSK_DBCLOCK finds out the saved time of the loaded snapshot.
TSK_DBCLOCK finds out the saved time of every cache.dat for every repository.
TSK_DBCLOCK performs the synchronization if one or more of the times differ.
TSK_DBCLOCK performs the synchronization if the SYNC command line option is specified, regardless of the saved times.
Check out the required license.
TSK_DBCLOCK periodically checks and validates the two licenses SLM_InfoPlus21 and SLM_InfoPlus21_Points.
Writes to both the DBCLOCK.OUT and DBCLOCK.ERR files
TSK_DBCLOCK now writes messages to both the OUT and ERR files.
Usually informational messages are seen in the OUT file, with 'failure' type messages being written to the ERR file.
Keywords: None
References: None |
Problem Statement: Users have reported issues when using Aspen Engineering Suite tools and saving files to shared drives, specifically Microsoft OneDrive. We have had defects reported against this for Aspen Simulation Workbook (pointing to files on OneDrive) and with Aspen Plus crashing non-reproducibly. | Solution: One of the issues with synchronization is that Aspen Plus generates many temporary files and issues can occur when these files are locked.
We recommend not saving to automatically synchronized drives. An option is to set a task for OneDrive to synchronize these folders once a day during an off time.
Keywords: None
References: None |
Problem Statement: This article describes the main requirements for a GDOT Service Account | Solution: According to the GDOT User Guide, there are a couple of major requirements that the Service Account should meet:
1.- All Aspen GDOT application processes must run in the context of a specific user; we refer to that account as a “Service Account”. If multiple services accounts are used, you may also want to create a user group, with the service accounts as members, to simplify the configuration of security settings associated with the GDOT applications.
2.- Because we do not want GDOT applications to terminate due to password expiration, these service accounts are usually configured with non-expiring passwords.
3.- Create the Aspen GDOT service account (e.g., it could be named “GDOTLauncher”), following this guidance:
This account should be a domain account for computers configured as members of a domain, or a local account for Workgroup or stand-alone computers.
Password should be set to never expire
The service account must be added to the local Administrators group (this is not required for machines with only GDOT Console installed).
Make sure to add the account to one or more of the GDOT user groups
4.- The user account name and password can be adjusted to adhere to any site-specific IM requirements and specifications, as long as the username and password are recorded for later DCOM registration of GDOT applications.
5.- When GDOT applications are connected to an OPC server that runs under a certain account on another machine and these machines are Workgroup computers (not Domain computers) using local user accounts, then it will be required to create an account with the same name and password, on the Aspen GDOT machine, and that account has to be added to one of the Aspen GDOT user groups
6.- The account used by the GDOT apps must remain logged in to the system (typically logged in through an RDP session). This is because the DCOM configurations of the applications expect that the user is logged in interactively, and mostly because Excel expects that the user profile is loaded on the system. If the user profile is not loaded, then Excel quickly throws an error after the workbook is opened and Model Update will terminate.
7. The need for Administrator rights. This is primarily due to file and directory access permissions.
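As an illustration, a local service account like the “GDOTLauncher” example above could be created from an elevated Command Prompt on a Workgroup machine (a sketch only; domain accounts would instead be created in Active Directory, and the GDOT user group names depend on your installation):
net user GDOTLauncher * /add
wmic useraccount where name='GDOTLauncher' set PasswordExpires=false
net localgroup Administrators GDOTLauncher /add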
Keywords: GDOT, Users Account
References: None |
Problem Statement: This article describes, in a couple of steps, how to use the Scheduling function of the Test Agent for the automatic generation of identification cases. | Solution: The Test Agent feature is a utility that can be used for many purposes during step testing. However, this solution covers only the case identification feature.
Test Agent has two requirements to work:
AspenWatch should be collecting the Controller Data
The Controller should be on Calibrate or Test Agent
Once this is set up properly, you should see the Test Agent panel in the top right corner of PCWS. Initially, Test Agent will appear OFF, but if you click on it and select Enable Test Agent Schedule, Test Agent will start working.
To Access the modeling tool, you can do it in two different ways:
1.- Go to Test Agent > click the secondary switch (normally showing Slice) > click Advance.
2.- On PCWS, go to Model > click the Modeling button.
The main page will show two options: Identification and Model Quality. The setup and automatic scheduling work in a similar way for both, but this article covers only the Identification case.
To set up an ID case, make sure that the correct option is selected and then click New.
The window will show some information that needs to be specified for the case to run. Most of it is similar to what a standard DMC3 case requires (case name, independent variables, dependent variables, etc.). More details about the general case information can be found in the help file.
To set up an automatic case ID we will use the Scheduling function. First, make sure to select the Schedule Case to Run checkbox.
When the checkbox is selected, we will see some parameters that can be used.
Schedule Start is the starting time from which the automatic ID will be generated. Normally you will prefer the current time or a future time.
The Interval is the time interval at which the case will be identified. For example, you can set up the interval to have a new identification every hour, every two hours, etc. The maximum interval is 1 month and the minimum is 1 hour.
The Repeat parameter controls how many times the ID routine will be executed. The minimum is 1 time; the maximum is Repeat Forever. This parameter is really important because it controls how many cases you will have identified at the end (this is controlled by the Repeat setting, not the Interval).
For example, if you set the Interval to 1 hour, the Repeat to 1, and the Start Time to 10:00 AM, a new case will be identified at 11:00 AM (as per the Interval), but because Repeat is set to 1 you won't get a new identified case at subsequent times. If Repeat is changed to 2, you will get a new case at 11:00 AM and another at 12:00 PM, and then the identification will stop.
To avoid that problem and create cases continuously, you can set the Repeat option to Forever and cases will be created continuously for as long as the step testing is ongoing. Then you can go back to the case, edit it, and simply uncheck Schedule Case to Run or change the Repeat setting to stop the automatic case creation.
Keywords: Test Agent, PCWS, Calibrate
References: None |
Problem Statement: This tutorial shows, step by step, the creation of a custom controller Aspen Watch report | Solution: Aspen Watch Maker is a powerful tool that allows you to collect controller information and use that information for multiple purposes. One of its interesting functions is creating custom controller reports. These reports can show different aspects of controller performance, such as use of the controller, MV use, CV use, KPIs, etc.
In this tutorial we will explain the steps to create a custom report for a simple fractionator controller example.
This solution contains a step-by-step PDF tutorial showing the steps, which can be replicated for any web system.
Keywords: AspenWatch, Reports, PCWS
References: None |
Problem Statement: How to create an Objective function in Aspen HYSYS Equation Oriented? | Solution: Before running a simulation in Aspen HYSYS EO in Optimization or Data Reconciliation run modes, users need to create an objective function. This article goes over the steps to create an objective function.
In the main EO subflowsheet, go to the EO Configuration group and click Objectives.
Click Add.
Provide the Name such as PROFIT, select type and specify Direction as Maximize.
Define the terms of the objective function.
Note: The sign of the Cost indicates whether the variable is considered as an income or expense in the objective function.
Keywords: Aspen HYSYS, Objective Function, Equation Oriented
References: None |
Problem Statement: Query examples to find a list of tags with similar names. | Solution: Example 1
In this first example, we will use the following set of tags (Example 1, Example 2, Example 3). Notice these tags are defined under IP_AnalogDef.
Open Aspen SQLplus
Write the following script (as shown below):
select name from IP_AnalogDef where name like 'Ex%';
Click the execute button (exclamation mark) or select Query > Execute to execute the query. The result is shown below.
Notice the percentage symbol (%) acts as a wild card. Thus, when executing the query SQLplus will find all tags defined under IP_AnalogDef that begin with Ex.
Example 2
For the next example, we will use the following set of tags (A1Pipe_Test, A1Tube_Test, A1Valve_Test):
Notice all tags end with the word Test.
Open Aspen SQLplus.
Write the following script (as shown below):
select name from IP_AnalogDef where name like '%Test';
Click the execute button (exclamation mark) or select Query > Execute to execute the query. The result is shown below.
We may also use the percentage symbol (%) at the beginning. This will provide all tags ending with the word Test.
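A wildcard on both sides can likewise be used to find tags containing a substring anywhere in the name; for example, against the Example 2 tags:
select name from IP_AnalogDef where name like '%Pipe%';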
Keywords: SQLplus query
List
Name
References: None |
Problem Statement: How do I find and use the ProMV Service Utility? | Solution: The ProMV Service Utility will validate the ProMV Online service and configure connection strings to the SQL database for both ProMV Online Continuous and Batch. The ProMV Service Utility is used to validate and fix any configuration issues in IIS, SQL, and RabbitMQ. The ProMV Service Utility is automatically run during the installation and will configure microservices. In order to diagnose/fix configuration issues for any services, follow the steps below. To run the ProMV Service Utility:
Open Command Prompt (Run as administrator).
Move to the folder location that the ProMV Service Utility is stored in by executing the following command: cd C:\inetpub\wwwroot\AspenTech\AspenProMVUtils\ServiceUtility
Execute the command: AspenTech.ProMV.SvcUtility.exe. To output to a plain-text file for easier reading and submitting the results to Aspen support, execute: AspenTech.ProMV.SvcUtility.exe > filename
Check the output for any errors
(Optional) Run the help parameter of the ProMV Service Utility by executing the following command: AspenTech.ProMV.SvcUtility.exe --help. This will return a list of parameters that you can use to fix configuration issues, change usernames/passwords, and change database connections.
Keywords: ProMV
service utility
troubleshooting
References: None |
Problem Statement: This Knowledge Base article consolidates the different troubleshooting KB articles for resolving the Excel Add-in #NAME error | Solution: The well-known reason the #NAME? error appears in a formula is that there is a typo in the formula name.
The #NAME? error signifies that something needs to be corrected in the syntax, so when you see the error in your formula, see the following KB articles, which will help to resolve this issue.
KB: Getting “#NAME?” error while using Aspen Process Data after upgrading Microsoft Office
https://esupport.aspentech.com/S_Article?id=000098124.
KB: What is the best way to prevent the Excel Process Data Add-in from returning the error #NAME?
https://esupport.aspentech.com/S_Article?id=000068284.
KB: How to solve missing 'Aspen Process Data Excel Add-in Functions'?
https://esupport.aspentech.com/S_Article?id=000097449.
KB: When using Excel Process Data COM Add-In, formula result shows #NAME? - how to fix?
https://esupport.aspentech.com/S_Article?id=000065705.
KB: MES Excel add-in reports stop working after Aspen InfoPlus21 is upgrade from older version (eg. V8.0) to V10, with #NAME in all cells
https://esupport.aspentech.com/S_Article?id=000062198.
Keywords: Excel, Add-in, #NAME, typo, Formula bar
References: None |
Problem Statement: What is the difference between the DSTWU & ConSep blocks? | Solution: This article helps users understand the basic differences between using the DSTWU & ConSep blocks.
Also, the purpose of this article is to make users aware of the applications available for the DSTWU & ConSep blocks.
DSTWU Distillation Column
DSTWU performs shortcut design calculations for single-feed, two-product distillation columns with a partial or total condenser.
DSTWU assumes constant molal overflow and constant relative volatilities
DSTWU uses the following methods/correlations for its estimates:
Winn: Minimum number of stages and optimum feed location at total reflux
Underwood: Minimum reflux ratio
Gilliland: Required reflux ratio and optimum feed location for the specified number of stages, or the required number of stages and optimum feed location for the specified reflux ratio
For the specified recovery of light and heavy key components, DSTWU estimates:
Minimum reflux ratio
Minimum number of theoretical stages
DSTWU then estimates one of the following:
Required reflux ratio for the specified number of theoretical stages
Required number of theoretical stages for the specified reflux ratio
To Specify DSTWU:
Use the Input | Specifications sheet to enter column specifications. The following table shows the specifications and what is calculated based on them:
The following specifications and corresponding results apply:
Recovery of light and heavy key components: Minimum reflux ratio and minimum number of theoretical stages
Number of theoretical stages: Required reflux ratio
Reflux ratio: Required number of theoretical stages
DSTWU also estimates the optimum feed stage location, and the condenser and reboiler duties.
DSTWU can generate an optional table of reflux ratio versus number of stages. Use the Input | Calculation Options sheet to enter specifications for the table.
ConSep Distillation Column
Use ConSep to develop design parameters and perform feasibility studies for distillation columns.
The ConSep block is available for users of Aspen Plus with an Aspen Distillation Synthesis license.
ConSep performs a boundary-value tray-by-tray calculation, starting from both ends of the column. If the design profiles from each end intersect on binary or ternary diagrams, the column is feasible.
Under the ConSep block, Interactive Design helps to generate a ternary plot containing the residue or distillation curve and the envelope specified on the Specifications sheet. You can change the specifications here and interactively see how the distillation lines change; these lines must cross for a feasible design.
Users can manually vary the various values to check how results such as the number of stages change with respect to changes in inputs such as recovery or reflux ratio.
The major advantage of the ConSep block is that it can easily be converted into a RadFrac block, transferring all the results to the RadFrac column.
To specify ConSep:
Choose three components on the Specifications sheet. These will be used for the ternary map analysis.
Make three recovery or composition specifications for the outlet streams on mole or mass basis.
Choose valid phases (vapor-liquid or vapor-liquid-liquid)
Specify operating pressure
Specify reflux or reboil ratio
Specify whether to generate a residue or distillation curve.
When using vapor-liquid-liquid calculations, you may specify a distillate decanter and a specification and key component for that specification, and also whether to generate a vapor-liquid-liquid or liquid-liquid envelope.
You can use the Component Map sheet to specify other components (besides the three specified ones) which should be treated like one of the specified ones.
For more details on DSTWU & ConSep, please visit the Aspen Plus Help & refer to the articles below:
Conceptual Design of Distillation Columns in Aspen Plus
https://esupport.aspentech.com/S_Article?id=000049919
What is the difference between DSTWU, Distl and RadFrac column capabilities?
https://esupport.aspentech.com/S_Article?id=000085929
Column Targeting Part I: Column Design Using DSTWU
https://esupport.aspentech.com/S_Article?id=000056805
Keywords: DSTWU, ConSep
References: None |
Problem Statement: What is a Power stream and how to define the Power stream in Aspen Hysys? | Solution: Power streams are energy streams that account for electrical power. Power streams can be used to model systems involving units such as electrolysis cells, since these streams account for parasitic energy demand in which some of the generated power is used to drive motors to compress hydrogen, oxygen, and/or air and to circulate fluids for cooling or renewal of the cell.
You must specify values for two of the following parameters. The remaining parameter is automatically calculated by Aspen HYSYS.
Voltage [V]: The following equation is used: Voltage = Power ÷ Amperage.
Amperage [A]: The value of the current. The following equation is used: Amperage = Power ÷ Voltage.
Power [kW]: The following equation is used: Power = Voltage × Amperage.
The remaining value is automatically calculated by Aspen HYSYS.
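For illustration, below is a minimal sketch (Python, not AspenTech code) of how the third parameter follows from the two specified ones; power is computed in W here, while the HYSYS field is displayed in kW:

def complete_power_stream(voltage=None, amperage=None, power=None):
    # voltage in V, amperage in A, power in W (divide by 1000 for kW)
    if sum(v is not None for v in (voltage, amperage, power)) != 2:
        raise ValueError("Specify exactly two of voltage, amperage, power.")
    if power is None:
        power = voltage * amperage        # Power = Voltage x Amperage
    elif voltage is None:
        voltage = power / amperage        # Voltage = Power / Amperage
    else:
        amperage = power / voltage        # Amperage = Power / Voltage
    return voltage, amperage, power

# Example: 400 V and 250 A gives 100,000 W (100 kW).
print(complete_power_stream(voltage=400.0, amperage=250.0))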
Optionally, specify the Utility Type for the stream. For a Power Stream, only Electricity type utilities are shown in the drop-down list. If a Utility Type is specified, emissions will be calculated and will appear in the Flowsheet Summary.
Keywords: Power streams, Voltage.
References: None |
Problem Statement: Why are certain reactor models such as RCSTR and RPlug unavailable on the model palette sometimes? | Solution: Aspen Plus allows you to switch between two Process Type environments, Batch & Continuous, within the same platform. This tool is built in such a way that you can have continuous, batch, or a combination of continuous & batch as well.
When creating a new simulation, if you specify a Batch template, then only reactors supported under Batch will be available, as shown below:
If you proceed with a non-batch template option, Aspen Plus assumes Continuous mode, so all reactors used in Continuous mode will be included, such as RCSTR and RPlug.
When using Batch Mode, only these reactors are available to use.
(Note: Batch Option – allows you to use BatchOp Block)
You can easily switch between the two options through the Batch tab in the ribbon menu, as shown here:
Keywords: Reactors block not available, Rplug, RCSTR not available, Model Palette
References: None |
Problem Statement: Why CAS No. is not available in search option for PubChem components search in Aspen Plus V14? | Solution: Aspen Plus V14 has added the function of searching for components in PubChem, but there is no option to use CAS No. to search for components, and the search results have no CAS number.
The reason the CAS No. is not displayed is that PubChem is a repository, not a fact checker. Instead of listing a single valid CAS No., it lists every CAS No. found in its information sources. There is no way to tell which CAS No. is the most correct, hence we do not display it. However, we do include an Open in PubChem option (right-click) so that you can view the entire entry on the PubChem website to determine if that result is what you intend.
The PubChem API does not provide search-by-CAS No. functionality. It only does a fuzzy search across all fields, including CAS No. That is why, when you enter 7732-18-5, you will see water at the top:
Keywords: PubChem, CAS No., search using CAS Number.
References: None |
Problem Statement: Where do I get the values for the heat fluxes in EDR and related information? | Solution: In Aspen EDR, heat flux is reported as a result under Thermal & Hydraulic Summary | Heat Transfer | MTD & Flux.
Below is the information available on the heat flux:
The overall heat flux across the exchanger is the total duty divided by total area (based on tube OD).
For checking cases, the required area is used.
The highest local heat flux is the largest local heat flux at any point within the exchanger(s). For a liquid being heated, there is a critical heat flux at which a stable boiling situation would break down: the heating surface would become covered in a blanket of vapour, and the heat transfer would be much reduced. The critical heat flux can change from point to point within an exchanger, depending, for example, on the relative amounts of liquid and vapour present in the bulk fluid.
Since the heat flux also varies from point to point within the exchanger, the situation is rather complicated, so two relevant parameters are given. One is the highest ratio of local heat flux to critical heat flux. If this ratio is below unity, it indicates that critical heat flux is unlikely to be a problem. The critical heat flux at this highest ratio is also given since this is the most important value of the critical heat flux.
It should be remembered that there is a degree of uncertainty in all critical heat flux calculations, so if the ratio of local to critical heat flux is not far below unity, there is still a potential risk.
When critical heat flux is predicted, by default the local heat transfer coefficient is reduced to a lower value appropriate to transfer through a gas film. The coefficient is reduced over a transition region above the critical flux, to represent in some measure the complicated physics associated with boiling breakdown.
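As a simple numeric illustration of these checks (Python; all values are assumed, not EDR output):

duty_total = 2.5e6        # total exchanger duty, W (assumed)
area_od = 480.0           # total area based on tube OD, m2 (assumed)
overall_flux = duty_total / area_od            # overall heat flux, W/m2
local_flux_max = 9.0e3    # highest local heat flux, W/m2 (assumed)
critical_flux = 2.0e4     # critical heat flux at that point, W/m2 (assumed)
ratio = local_flux_max / critical_flux         # highest local/critical ratio
# A ratio well below unity suggests critical heat flux is unlikely to be a
# problem; a ratio not far below unity still carries risk, as noted above.
print(overall_flux, ratio)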
Please refer to this article as well:
Critical heat flux in flow boiling https://esupport.aspentech.com/S_Article?id=000092400
Keywords: Heat flux, critical heat flux
References: None |
Problem Statement: Best Practices - Recycle loop convergence applications in Aspen Plus | Solution: The purpose of this article is to make users aware of the various options available for recycle loop convergence.
Aspen Plus users may face convergence issues when running a simulation with recycle loops, and models with multiple recycle streams can be especially difficult to converge.
This article offers some simple tips for converging such models, but there is no limitation on using other convergence options; you may find multiple solutions that converge a given problem.
Below are few points:
1. The most important step is to understand that whenever you develop a simulation with a recycle, the software will create a tear stream for convergence (simply put, one stream in the recycle loop is chosen as the tear stream used to converge the model). Aspen Plus usually selects a suitable tear stream itself.
2. If the model does not converge, it is good to check the Control Panel (as shown in the snapshot below); the Control Panel shows which stream is treated as the tear and the method used for its convergence. This stream and method should be the first priority to focus on.
3. If the model does not converge with the standard/default convergence method or tear stream, you can modify both options.
4. A simple approach is to break the recycle stream in two and give the broken recycle stream an initial value; this gives the software an initial estimate to start from when running the simulation. You can also reconcile the stream (right-click the stream to break, then reconcile and break) so that the software takes suitable values itself (take care that the values are not unreasonable, such as excessively high flows, or negative flows or temperatures). Once the broken-stream results are almost matched, the stream can be reconnected to converge the full model.
5. You should also check the convergence options available to modify –
6. Adjusting the tear convergence options, such as tightening the tolerance to 1e-6, can also help converge the model.
7. Flash convergence iterations can also be increased from the default of 30 to 50 or 100.
8. The default convergence method can also be changed – by default it is the Wegstein method, and you can try Broyden as well (see the sketch after this list for the idea behind Wegstein acceleration).
9. If breaking and reconnecting the stream does not work, it is good to change the tear stream to one with better-known values; for example, temperature or pressure is fixed at a heater or pump block, so good initial values are available there. The outlet stream of a heater, pump, valve, or duplicator block is a good candidate tear stream.
10. You can also use Transfer and Balance blocks to help converge the model.
11. Make sure the model you are trying to converge is a steady-state simulation and that the flow in the recycle stream does not keep increasing, which leads to an unconverged model. In such a case, you can use a makeup stream with a calculator block or design spec to fix the total flow entering the recycle loop. A small purge from the recycle can also help with mass balance convergence issues.
The above are some basic tips to apply or check while converging a model; it is always good to check Aspen Plus Help for more details on convergence and troubleshooting options.
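As referenced in tip 8, below is a minimal sketch (Python, illustration only) of the Wegstein acceleration idea used for tear streams; Aspen Plus applies this per tear variable, and the bounds on the acceleration factor q used here are assumptions for the sketch:

def wegstein(g, x0, tol=1e-6, max_iter=100, q_min=-5.0, q_max=0.0):
    # g(x) represents one flowsheet pass returning an updated tear value.
    x_prev, gx_prev = x0, g(x0)
    x = gx_prev                      # first step: direct substitution
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx - x) < tol:
            return x
        q = 0.0                      # default: direct substitution
        if x != x_prev:
            s = (gx - gx_prev) / (x - x_prev)   # secant slope of g
            if s != 1.0:
                q = max(q_min, min(s / (s - 1.0), q_max))  # bounded factor
        x_prev, gx_prev = x, gx
        x = q * x + (1.0 - q) * gx   # accelerated update
    return x

# Toy example: the fixed point of g(x) = 0.5*x + 2 is x = 4.
print(wegstein(lambda x: 0.5 * x + 2.0, x0=0.0))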
Below articles will also help on convergence applications:
Aspen Plus Convergence Checklist - https://esupport.aspentech.com/S_Article?id=000083932
Keywords: Recycle convergence, Convergence, Tear stream, Reconcile.
References: None |
Problem Statement: How do I print large flowsheet & sections of the flowsheet in Aspen Plus? | Solution: Aspen Plus allows printing of the flowsheet in multiple ways, as described below.
A. Printing Large Flowsheets
For large flowsheets, it is often necessary to print the flowsheet on multiple pages. You may also want to only print one flowsheet section at a time.
To print on multiple pages:
From the File menu, click Page Setup.
Specify the desired number of horizontal and vertical pages.
From the View tab of the ribbon, in the Show group, click Page Break.
Select the page borders to move the location of the pages, or select a corner to change the size of the pages relative to the flowsheet.
Note: All of the pages must remain equally sized.
You can also move elements of the flowsheet such as the unit operation icons, tables, and annotation and arrange them to fit on a desired page.
B. Printing a Section of Flowsheet
To print a section of flowsheet:
In the Flowsheet Modify tab of the ribbon, in the section group, select the section you want to print from the list, then clear the Show All checkbox.
If necessary, make adjustments to the Page Setup.
In the File menu, click Print.
Choose the printer and desired settings in the Print dialog box.
Click OK.
You can also simply select the flowsheet, paste it into Excel, and prepare consolidated reports for printing.
Keywords: Large flowsheet print, Section of the flowsheet to print
References: None |
Problem Statement: How to troubleshoot the error “Current license state of ip21 server does not allow this operation” while configuring new tags in Aspen InfoPlus.21
This error indicates that your IP21 is currently in a License Denied state.
You can confirm this by performing the following steps.
1) Launch IP.21 Administrator.
2) Right-click on IP.21 data source name.
3) Select Properties from context menu.
4) Select the tab called License Status
There are a few possible reasons why your IP.21 started in a License Denied state, as listed below:
1. Unable to get the necessary license feature because the license feature does not exist.
2. Maximum point count set for IP.21 exceeded what the license file allows.
3. Insufficient available tokens.
4. License had expired. | Solution: 1. Please ensure that your server on which IP.21 is installed is able to connect to a SLM server with a valid license.
2. If point 1 is checked, then it may also be possible that you had set the maximum point count to be too large, that is, more than the available tokens you have.
In this case, please perform following,
a. Stop IP.21.
b. Launch IP.21 Administrator.
c. Right-click on IP.21 data source name.
d. Select Set Point Count.
e. Enter a point count which is smaller and that you have sufficient available tokens in your license server to support.
Keywords: None
References: None |
Problem Statement: How to ignore Inlet Pipe Warnings for Ignored Sources in Aspen Flare System Analyzer? | Solution: You can ignore Inlet Pipe Warnings for Ignored Sources in Aspen Flare System Analyzer from Calculation Settings.
Open Calculation Settings from the home ribbon, click the Warning tab, and select the Ignore Inlet Pipe Warnings for Ignored Sources option.
Keywords: Warnings, Ignored Sources
References: None |
Problem Statement: How to automatically route electrical cable trays and conduits in an Aspen OptiPlant 3D layout? | Solution: Cable trays and conduits can be automatically routed in Aspen OptiPlant. The program provides an option to route the electrical supplies in the plot plan, with an MTO giving the corresponding information. Electrical routing uses the same fundamentals as pipe routing. It is not necessary for any conduit to be present; normally, trays run above the equipment, and cables required by the equipment are dropped directly from the tray to the top of the equipment.
Follow the below steps for electrical routing.
1. Go to Auto Route >> Electrical List.
2. Provide the ECLLS file name.
3. A form to define Units will open, in that form define the units.
4. Now add the lines in the electrical list template.
5. After adding the lines, save the electrical list.
6. Now go to the Routing Configuration and turn on the checkbox to include Electrical.
7. Click on Run Batch.
Keywords: Electrical Routing, OptiPlant, Trays, Conduits
References: https://esupport.aspentech.com/S_Article?id=000099616 |
Problem Statement: How to design modules for the highlighted items in Aspen OptiPlant? | Solution: Modular fabrication and construction offer several advantages over conventional stick built construction. Modular construction also minimizes lay-down space, an important benefit when the field site is small or congested and reduces delays due to adverse weather. The modular construction technique is applicable to almost any project. However, one of the main aspects that comes into play for modularization is that the Engineering and design must be executed earlier in the project.
Aspen OptiPlant helps achieve this goal earlier in the project where designers can design and optimize a layout by considering the design requirements for modular construction and helps validate and optimize the modules designed.
Fence or select the items.
Then, select the menu Modularization >> Design Module >> fill in the required values as per your need.
Next, under the Structures list box, keep only the following options toggled ON - Frame1-2, Frame 2-3 & Frame 3-4 under Level/Bay1, Level2/Bay1 and Level3/Bay1; all the rest should be toggled off.
Next, click on ‘Calculate Pipe Weight’ button.
Next click on COG button to calculate and display the COG location on screen.
Click on save and validate button to generate the report on this designed module.
Close the report. The CSV report is saved in the Modules folder within the project folder.
Keywords: Modularization, Fence, OptiPlant, COG, Pipe weight, Structures
References: None |
Problem Statement: How to use API 660 Table 2 values for external nozzle load calculations in Aspen Shell & Tube Mechanical V14? | Solution: Starting in V14, you can use API 660 Table 2 values for external nozzle load calculations.
When Yes is selected on the Input | Program Options | Loads-Ext/Wind/Seismic | Ext. Loads tab, the program will use the nozzle loads and moments from API 660 Table 2 to perform external nozzle load calculations such as UG-44(b) and WRC-107 without doing a full API 660 calculation. This will override any user-input individual nozzle load or moment values in the Input | Exchanger Geometry | Nozzles-Details-Ext.Loads | External Loads tab.
Keywords: API 660 Table 2, External nozzle load calculations.
References: None |
Problem Statement: Relational Database Error -2147467259: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified.
Due to Microsoft policies, the information in the Advisor.udl file is not always saved in the same location, so sometimes you have to copy it from the virtual store directory. | Solution: The following additional steps need to be performed for the 64-bit Excel Add-in configuration:
1. Open ODBC Data Sources (64-bit) to add a new data source as below.
2. Configure Advisor.udl under C:\ProgramData\AspenTech\Advisor.
Under the Provider tab, select Microsoft OLE DB Provider for ODBC Drivers.
Under the Connection tab, select the ODBC data source (64-bit) that you configured from the Use data source name drop-down list. Enter information to connect to the selected database and test your connection before clicking OK to save.
3. Use Notepad to open Advisor.udl and check your information is correctly saved.
[oledb]
; Everything after this line is an OLE DB initstring
Provider=MSDASQL.1;Password=adv#password;Persist Security Info=True;User ID=advisor3;Data Source=Excel64;Initial Catalog=DemoDatabase
4. If your information is not correctly saved, you must open and configure Advisor.udl under C:\Users\<username>\AppData\Local\VirtualStore\ProgramData\AspenTech\Advisor as in the previous step.
5. Copy Advisor.udl from C:\Users\<username>\AppData\Local\VirtualStore\ProgramData\AspenTech\Advisor to replace Advisor.udl under C:\ProgramData\AspenTech\Advisor.
Note: The Excel add-in XLA file is located under: C:\Program Files (x86)\AspenTech\ExcelAddins.
Keywords: Run-time error ‘-2147467259 (80004005)
Data Source name not found
no default driver specified
64Bit Excel addin
Advisor Excel addin
References: None |
Problem Statement: How to connect a process stream to a convection bank after leaving the firebox in Aspen Fired Heater V14. | Solution: Starting in V14, you can select the process stream entering the convection bank after leaving the firebox from the Input | Heat Geometry | Convection Banks | Layout tab and view the connection diagram in the Connection Diagram tab.
Keywords: Convection bank, firebox.
References: None |
Problem Statement: Aspen EDR V14 integration with 32-bit Aspen Plus and Aspen Properties. | Solution: Aspen EDR V14 is 64-bit and must be used with 64-bit Aspen Plus and Aspen Properties. EDR V14 can use Aspen Properties V11, V12, V12.1, or V14, depending on which version is registered. EDR V14 will automatically use the 64-bit version of Aspen Properties if an earlier 32-bit version of Aspen Properties is registered.
EDR V14 applications can only import from 64-bit versions of Aspen Plus: V11, V12, V12.1, and V14. If you try to import from a 32-bit version of Aspen Plus, you will get an error that EDR is unable to create the Aspen Plus data extraction component. Because of differences in architectures, EDR V14 can import from any version of Aspen HYSYS, including 32-bit versions (V8.8, V9.0, V10).
Keywords: EDR, 64-bit and 32-bit
References: None |
Problem Statement: How to specify individual Minimum bolt area ratio AB/AM for each flange in Aspen Shell & Tube Mechanical V14? | Solution: A new input has been added to the Input | Exchanger Geometry | Body Flanges | Dimensions tab to allow you to specify individual minimum bolt area ratios for each flange. This is the ratio of the actual bolt area (AB) to the required bolt area (AM). The minimum value is 1. Specifying a larger value than this will result in a design that has a larger than required available bolt area. If different values are specified for flanges that share the same bolting, the larger value will be used for both flanges. These values of AB/AM for the individual flanges will overwrite the value specified in the Input | Exchanger Geometry | Body Flanges | Options tab.
In the code calculation output, the final value of AB/AM can be found in the Results | Code Calculations | Body Flanges page.
Keywords: Minimum bolt area ratio AB/AM, Body Flanges.
References: None |
Problem Statement: How to specify Sources by Pressure and Vapor Fraction for Valves in Aspen Flare System Analyzer V14? | Solution: In previous versions of Aspen Flare System Analyzer, source conditions for Valves could only be specified based on pressure and temperature. This caused problems for two-phase sources that were narrow-boiling or single-component because the temperature specification could not adequately describe the fluid state.
In V14, you can specify the source conditions based on pressure and vapor fraction by selecting Pres./Vap. Frac. Spec from the new Inlet Specification Type drop-down list on the Conditions tab of the Relief Valve Editor and Control Valve Editor. The Inlet Vapour fraction field allows you to specify the vapor fraction of the source on the upstream side of the relief valve.
When importing sources from the Safety Analysis environment in Aspen HYSYS or Aspen Plus, Aspen Flare System Analyzer sets specifications to match usage within the Safety Analysis environment. For example, if a vapor fraction is specified in the Safety Analysis environment, then Pres./Vap. Frac. Spec is selected, and the Inlet Vapour fraction uses the specified value.
Keywords: Sources, Vapor Fraction
References: None |
Problem Statement: How to use Aspen Properties in Aspen Flare System Analyzer V14? | Solution: Aspen Properties is an available Property Source for calculations. To change the Properties Source in an existing file, open Calculation Settings from the home ribbon and click the Methods tab. In the drop-down menu, select Aspen Properties. The Calculation Settings Editor dialog box changes to reflect the change in Properties Source. This allows you to change several settings related to Aspen Properties:
Overall Model: You can change the Overall Model by selecting options in the drop down menu. The following model options are supported:
Peng-Robinson
Soave-Redlich-Kwong
PC-SAFT
Cubic plus association
NRTL
Ideal gas/ideal solution
IAPWS-IF97 steam tables
National Board steam tables
Phases: You can specify the Phases in the model by using the drop-down menu. The default option is Two-phase, but there are the following options:
Single-phase (Vapor)
Single-phase (Liquid)
Two-phase
Two-phase (Liquid-Liquid)
Three-phase
Flash Method, Flash Tolerance, and Flash Max Iterations: These options allow you to further customize your calculations, though they are not typically changed. If you find your flashes are not converging, you can try changing your Max Iterations.
Diagnostic Level: These options allow you to select what is logged in the history file. The history file can be found in AppData\Local\AspenTech\Aspen Flare System Analyzer V14.0.
Keywords: Aspen Properties.
References: None |
Problem Statement: How to select Churchill-Bernstein method as the External Convection Model for forced convection heat transfer calculations. | Solution: In previous versions, ESDU was always used as the External Convection Model for forced convection heat transfer calculations. In V14, Aspen Flare System Analyzer also offers the Churchill-Bernstein heat transfer correlation as an option. This correlation is available from the External Convection Model drop-down list on the Methods tab of the Calculation Settings Editor. The Churchill-Bernstein correlation has a wider range of applicability than ESDU and is an accepted alternative for heat transfer calculations.
The following options are available to determine how Aspen Flare System Analyzer calculates forced convection heat transfer. See Heat Transfer Correlations for further details.
ESDU: This method was always used to calculate forced convection heat transfer for versions prior to V14. When you open a file created prior to V14, ESDU is selected.
Churchill-Bernstein: The Churchill-Bernstein correlation has a wider range of applicability and is an accepted alternative for heat transfer calculations.
The validity limits are as follows:
For ESDU: Re < 10^4 (as per the ESDU 69004 standard)
For Churchill-Bernstein: Re Pr ≥ 0.2
When any of the pipes exceed the applicability limits of the heat transfer correlation, a warning message appears.
At the conclusion of the solution of the network, if any of the pipes exceed the applicability limits listed above for the heat transfer correlation, the following warning message appears: Conditions out of applicable range for external forced convection heat transfer coefficient for pipe <Pipe Name> on Scenario <Scenario Name>. Results may not be reliable.
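For reference, the Churchill-Bernstein correlation for the Nusselt number in crossflow over a cylinder can be sketched as follows (Python, illustration only); the film coefficient then follows from h = Nu*k/D, where k is the fluid thermal conductivity and D the pipe outside diameter:

def nusselt_churchill_bernstein(re, pr):
    # Valid for Re*Pr >= 0.2 (see the applicability limits above).
    if re * pr < 0.2:
        raise ValueError("Outside applicable range: Re*Pr must be >= 0.2")
    nu = 0.62 * re**0.5 * pr**(1.0 / 3.0) / (1.0 + (0.4 / pr)**(2.0 / 3.0))**0.25
    return 0.3 + nu * (1.0 + (re / 282000.0)**(5.0 / 8.0))**(4.0 / 5.0)

# Example: air-like fluid, Re = 1e4, Pr = 0.7.
print(nusselt_churchill_bernstein(1.0e4, 0.7))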
Keywords: Churchill-Bernstein, Forced convection heat transfer calculations, External Convection Model.
References: None |
Problem Statement: You may get errors such as “Failed to scan metadata for some tags on 'Servername'” or “Failed to scan tags in tag list” if aspenONE Process Explorer is unable to scan the IP.21 database.
This article provides troubleshooting steps if aspenONE Process Explorer (A1PE) fails to scan IP.21 database tags. | Solution: 1. Please make sure that the IP.21 server is up and running.
2. Open ADSA tool as administrator.
Check and verify that ADSA is configured correctly.
3. If the ADSA is configured correctly, then go to the sample pages.
On the web server, run Internet Explorer as administrator, open http://localhost/processdata/samples/Sample_DataSources.aspx, and click on Issue Request.
4. On this page, you will see which data sources are configured.
5. The next step is to verify that you are able to connect to the data source.
Click on the Home button on this page
6. Go to Browse and again click on Issue Request. This will browse for a list of tags.
You will see the error description at the bottom of this page.
7. Open file explorer and browse to C:\Program Files\AspenTech\MES\ProcessData.
Run the ProcessDataAdministrator.exe.
Select the data source and validate that the tag query is successful.
Keywords: Failed to scan metadata
Failed to scan tags in tag list
ADSA
References: None |
Problem Statement: Why does the Aspen HYSYS Economic Analysis module display the material cost as 0 in the operating cost analysis? | Solution: Aspen HYSYS V14 has been launched with new features in the economics module.
The Economics tab of the ribbon provides an interface to automatically send data from the unit operation models in your simulation to Aspen Process Economic Analyzer (formerly Aspen Icarus Process Evaluator). There are also links to parts of HYSYS related to economics.
Stream Price: Open the Model Summary Grid page to specify stream price factor and stream price basis for streams and view cost flow.
Process Utilities: Open the Process Utilities Manager page to define process heating, cooling or power utilities. You can define the energy content, price, and greenhouse gas contribution for each type of process utility.
Cost Options: Open the Costing Options page to specify the costing template and other investment options.
For operating cost analysis, raw material and product rates should be specified. After adding/updating the rates, the project should be re-evaluated.
To add the cost rates, follow the steps as given below:
Open the material or product stream.
Click on Worksheet >> Cost Parameters.
Enter the Stream price and choose the appropriate stream price basis.
To do the economic evaluation, follow the below guidelines:
Once the inputs have been updated, activate Economics.
Check Economics Active
Click on Map >>Size >> Evaluate
To generate the report, click on Investment Analysis.
Keywords: Economics, Cost parameters, Material cost, Mapping, Sizing
References: None |
Problem Statement: It is a common issue that after deleting cache/temporary files or applying certain patches that the loaded applications from the Online section on DMC3 Builder disappear, how can you add them back? | Solution: It is a known issue that after deleting cache/temporary files or applying certain patches that the loaded applications from the Online section on DMC3 Builder disappear:
This doesn’t necessarily mean that the applications that were loaded to the APC Online server got deleted; in most cases, what actually got deleted is the DMC3 Builder connection to the server:
To get the applications back in DMC3 Builder, all you need to do is recreate the connection. Go to the Online -> Servers tab and select Add, then type the server name you want to give it; the Host should be the hostname of the APC Online server machine, and the default Port Number is 12346:
After you add it and the status shows Connected, the applications should reappear on DMC3 Builder.
Keywords: DMC3 Builder, online, servers, applications, controllers
References: None |
Problem Statement: This Knowledge Base article provides an explanation of how to create a skeleton model or empty model in DMC3 | Solution: A skeleton model has no response curve definitions but allocates space for data corresponding to a particular set of tag or variable names, which you can provide at any time later during application development. After you complete the create skeleton model operation, the resulting skeleton model is added as a new application in the Controllers navigation tree.
Note: This feature creates only FIR model-based applications (DMCplus or DMC3).
The option for creating a new application with an empty, or skeleton, model is available if you begin from the Datasets view or Controllers views. Procedures for either approach are provided below.
To begin from the Datasets view and create a new skeleton model application:
First, access the Dataset view or the History view for viewing trend plots of vectors included in a dataset.
From the right side of the main ribbon menu, click Create Model.
As a result, either of the following occurs:
If you are working in an APC project, the Model Type Selection dialog box is displayed.
--or--
If you are working in a DMC3 project, then display of the Model Type Selection dialog box is skipped, and the Identify Model - Specify Structure dialog box is displayed. This is because all applications in an Aspen DMC3 project are based on the FIR, or DMCplus, type of model.
If the Model Type Selection dialog box is displayed, select DMCplus: A linear finite impulse response model for DMCplus, and then click OK.
As a result, the Identify Model - Specify Structure dialog box is displayed.
In the Identify Model - Specify Structure dialog box, select the Empty model option, and complete other parameters in the Properties pane. Then click OK.
To begin from the Controllers views and create a new skeleton model application:
Click Controllers from the navigation pane selectors (lower left corner of the main window). The Controllers navigation tree is displayed.
From the Controller Tree ribbon group, main menu, click New Controller.
The Create Skeleton Model dialog box is displayed.
If you are working in an APC project, select DMCplus from the Model Type drop-down list. If you are working in a DMC3 project, the Model Type drop-down list is not displayed. This is because all applications in an Aspen DMC3 project are based on the FIR, or DMCplus, type of model.
In the Create Skeleton Model dialog box, select the Empty model option, and complete other parameters in the Properties pane. Then click OK.
Keywords: DMC3, Skeleton, Empty model, controller, Dataset
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve error message pops up on DMC3 builder (one or more Validity limits have not been initialized. Setting uninitialized validity limits to engineering limits.) | Solution: Several customers reported the following error in DMC3 Builder with a pop-up message as follows:
To resolve this issue:
Navigate to Deployment
On the top Ribbon go to Online Setting
Look for any red highlighted validity limit.
Changing this value to a proper value will resolve the issue and stop the error message from popping up.
Keywords: Validity, uninitialized, limits, DMC3
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve the issue of not able to simulate the FIR intermittent variable in the DMC3 Builder Simulation | Solution: Simulating the FIR intermittent variable enables you to treat intermittent variables during simulation the same as they would be treated in an online system
An intermittent variable periodically requires manual entry of a measured, or process, value within a specified number of cycles. The NewPVInput parameter is set to Yes (either manually or through a calculation or database connection) to indicate to the controller that a new value is available. This is a requirement of intermittent variables when the application is running online. If the value of an intermittent variable is not updated within a specified timeout period, then the value for the variable is marked as bad.
To use the option, Simulate FIR intermittent:
Access the Options dialog box:
In the File tab of the main menu, click Options. The Options dialog box is displayed.
From the Categories list (on the left), select Advanced Options.
In the Preferences table (on the right), locate the Simulate FIR intermittent variables option.
Then click the drop-down list (on the right) to select either of the following settings, as appropriate:
False – Allows simulations to run as though the NewPVInput parameter has been reset to Yes every cycle. This is the default.
True – Allows simulations to run so that the NewPVInput parameter requires updating, from No to Yes, every cycle. For intermittent variables, this updating must occur manually.
Click OK or Apply to save any changes made in selected options.
Keywords: Intermittent Variable, Simulate, FIR, DMC3, NewPVInput
References: None |
Problem Statement: In the DMCplus CCF, there are some array entries that are configured as individual entries including:
General (under DMCplus Build > Configure section): Future Move Times (FMOVT), Compressed Prediction Times (PDEPT)
Independents: Future Moves (FMOV)
Dependents: Compressed Predictions (PDEPC) and Predictions with Control (PFMDEP)
For example, if using a setting of 8 future moves, there will be 8 entries for future move times:
FMOVT001, FMOVT002, ..., FMOVT008.
The entries under Configure section are always available and the entries under Independent and Dependent sections are available for the user to directly map to individual IO tags if they enable the setting under Tools > Options > Engine tab > Output future moves and predictions to PCS Tags (FPENB).
Since these entries are provided in the CCF as individual entries, it is straightforward to map them to individual DCS tags. How can the user do the same in DMC3 Builder? | Solution: In DMC3 Builder, these built-in entries are configured as a Double Array data type. Even if the user tries to map them to an IO tag in the Deployment section using the Customize button to add an IO tag for these entries, only one IO tag is added to be mapped:
One tag added for FutureMoveTimes, instead of individual tags for the total number of moves:
The workaround for this is to:
Create user defined entries for each value of the array and set Keyword to Write
Then create an output calculation to map the user entries to the built-in entry's array, for example: UserEntry_FutureMoveTime01 = FutureMoveTimes[1]
Then in Deployment view, use Customize button to add the user defined entries in the list of IO tags, and then map them to the DCS tag names.
Steps to follow with example:
*IMPORTANT NOTE: see the attachments at the end of the article below for an XML file with already configured user entries and calculations that can be imported in the DMC3 Builder Calculations section (V12.1 and higher) as a starting point for this solution. In the Calculations view, click Import Calcs, select this XML file, and click Merge to import the relevant calculations and user entries. Unfortunately, the properties of user entries are not retained, so you will need to manually change the Keyword for the user entries to Write. If using this imported XML file, steps 2 and 3 below can be skipped.
1. First set the Future Move Settings in the controller's Simulation view > Move Settings button under Actions menu.
In this example we are setting it to 14 future moves:
* Before moving on to Step 2, see Important Note in red above.
2. Next, go to Calculations view > User Entries and create the entries as required for a total number of moves.
2a) In the Dependent variables section, create 14 entries for Compressed Predictions and 14 entries for Prediction with Control (note that creating the entries in the Dependent section will make these entries available individually for each dependent variable when mapping them in the calculation or to the IO tag later; there is no need to create them individually for specified variables).
Set user entries Data Type to Double and in the Properties section, set the Keyword to Write.
2b) In the General variables section, create 14 entries for Compressed Prediction Times and 14 entries for Future Move Times. Set user entries Data Type to Double and in the Properties section, set the Keyword to Write.
2c) In the Independent variables section, create 14 entries for Future Moves. Set user entries Data Type to Double and in the Properties section, set the Keyword to Write.
3. Go to the Outputs Calculation section and create calculations that map the user defined entries to the built-in entry's arrays.
This can all be done in one calculation or individual calculations; in this example we will make 5 calculations for each of the entries.
3a) In the first output calc, we will write the Future Move Times calculation like this:
User_FutureMoveTime01 = FutureMoveTimes[1]
User_FutureMoveTime02 = FutureMoveTimes[2]
...
User_FutureMoveTime14 = FutureMoveTimes[14]
Then map the calculation parameter User_FutureMoveTimexx to the user defined entry (created under General section) called User_FMOVTxx and map the calculation parameter FutureMoveTimes to the built-in entry (under General section) called FutureMoveTimes.
3b) Repeat the same for Compressed Prediction Times if needed. (Tip: copy the calculation code from FutureMoveTimes and use the Find and Replace button on the top right to change the calculation parameter names).
3c) Repeat the same for MV's Future Moves, remember this time the variable entries to be mapped are going to be found under the Independent section. This will apply to all Independent variables, there is no need to repeat the calculation to define it for each MV.
3d) Repeat the same for CV's Compressed Predictions, remember this time the variable entries to be mapped are going to be found under the Dependent section.
3e) Repeat the same for CV's Predictions with Control. The variable entries to be mapped are going to be found under the Dependent section again.
4. Go to Deployment view, use the Customize button on the top tools ribbon to add the created User Entries and then map them to the IO Tags.
4a) For General entries to be added, click on the Controller Name under Tag Generator, then click Customize to see the user entries for Compressed Prediction Times and Future Move Times:
Tags added for IO mapping:
4b) Similarly for Future Moves, click on an Independent variable from the list and use Customize to add the Future Moves user entries. You can also use the radio button “Apply the same change to all variables of the same type” to add them automatically to all the MVs.
Tags added for the selected MV:
Tags added automatically for the other MVs:
4c) Repeat the same for Compressed Predictions and Predictions with Control by clicking on a Dependent variable.
This completes the steps to the workaround.
============================================================
Discussion for Enhancement
The above procedure can take some time when initially implementing the workaround in DMC3 Builder so there was an enhancement request submitted:
Enhancement APC-I-1476: DMC3 Array Entries Should be Able to Map to Individual IO Tags Without Having to Create User Entries and Calculations
After discussing with Product Management, it looks like there are currently no plans to implement this change for a number of reasons. One is that there isn’t a big use case for mapping these entries to write to the DCS because the PCWS web interface already displays this information in a graphical format. When you click on the variable name on the web page and view the detail plot, it shows future moves (FM) over time for the MVs, and the CVs show both open loop (OL) prediction (i.e. Compressed Predictions) and closed loop (CL) prediction (i.e. Prediction with Control). Most users have the PCWS to view this information, so they don’t require writing to DCS and plot them.
For example, CV and MV plots from the PCWS:
The other reason is that mapping all of these entries and sending the info to DCS adds a lot of IO traffic that can cause communication performance issues. This may not be a problem in all cases but some sites with a large number of variables and tags can run into that network bottleneck.
Although it is understandably quite tedious to implement this workaround of user entries and calculations for one controller the first time, you can standardize it for the rest of the controllers by using the “Export Calc” and “Import Calc” buttons available in V12.1 and higher. This way it is more work in the beginning but when you copy it to other controllers, the user entries and their calculation mappings will still be there and makes it easier the next time you want to implement this same configuration.
This KB article with the attached XML file is also provided to users to help make this implementation easier.
Keywords: dmc3, array, entries, future, move, tag, mapping, IO, compressed, prediction, time
References: None |
Problem Statement: When trying to deploy a controller from DMC3 Builder, it fails to deploy and shows an error message:
There are errors with the configuration of online settings. | Solution: This error can appear when there are conflicting engineering or validity limits in the Online settings.
Select the Deployment tab and click on Online Settings on the top left here:
If an engineering limit is in conflict with an operating limit or if a validity limit is in conflict with an engineering limit, these input or output validation limit cells will be highlighted with a red background. If you hover over the red cell, it will show a tooltip of what it is in conflict with.
For example, if I set the Lower Engineering limit to be lower than the Lower validity limit, it shows this:
To resolve this issue, change the Engineering or Validity limits in these cells to not be in conflict, or go to Simulation view to change the Operator Limits.
The order of increasing range of the limits should be Operator, then Engineer, then Validity. The operator low limit should be higher than the engineer low limit, and the engineer low limit should be higher than the validity low limit. The operator high limit should be lower than the engineer high limit, and the engineer high limit should be lower than the validity high limit.
Lower Validity < Lower Engineer < Lower Operator < Upper Operator < Upper Engineer < Upper Validity
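A minimal sketch of this ordering check (Python; a hypothetical helper, with strict inequalities assumed for illustration):

def limits_consistent(lo_validity, lo_engineer, lo_operator,
                      hi_operator, hi_engineer, hi_validity):
    # Mirrors the chain: Lower Validity < Lower Engineer < Lower Operator
    #                    < Upper Operator < Upper Engineer < Upper Validity
    return (lo_validity < lo_engineer < lo_operator
            < hi_operator < hi_engineer < hi_validity)

print(limits_consistent(-10, -5, 0, 100, 105, 110))  # True: consistent limits
print(limits_consistent(-5, -10, 0, 100, 105, 110))  # False: red cell / error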
Keywords: dmc3, builder, online, settings, error, deployment
References: None |
Problem Statement: When trying to analyze Aspen Watch data, some tasks are easier to do using a different tool such as Microsoft Excel; how can we extract data from the archives so we can view it using this tool? | Solution: There is a proposed solution of using the legacy “Process Data” Excel add-in, but since this is no longer supported, the best option is to use SQLplus to extract data, just like you would with any other InfoPlus.21 information.
Here is an example of the query that we will be using:
SET COLUMN_HEADERS = 1;
SET VALUE_BETWEEN =',';
SET HEADER_BETWEEN =',';
SET HEADER_LINE = '';
SET Output 'C:/Temp/Result.csv'; --Change this for a path on your system
Select NAME, AW_H_TIME_1, AW_SSDEP_H from AW_DEPDEF --Aspen Watch parameters to extract from either AW_DEPDEF or AW_INDDEF
Where name = 'C01D_COLDP' --Variable name
and AW_H_TIME_1 between '01-DEC-22 00:00' and '21-DEC-22 16:00'; --Alternatively, use CURRENT_TIMESTAMP
SET Output Default;
The lines that need to be modified are lines 5–8; there are already some comments, but let's go over them:
5. Define the location where you want to save the resulting .csv file; by default, it is set to the C:/Temp folder.
6. Select which parameters or columns you want to extract from the Aspen Watch records; these are saved in the AW_#_IN_MEMORY… repeat fields of AW_DEPDef and AW_INDDef:
If you are not sure how a parameter is named on Aspen Watch records, you can check the DMC3 Entry Dictionary:
7. Declare the variable name, again you can review this under AW_DEPDef or AW_INDDef. The prefix format is “C” followed by the Aspen Watch Controller ID and then “D” for dependents or “I” for independents.
8. Select the time interval from which you want to extract data; you can use CURRENT_TIMESTAMP as the end date to extract all data since a defined moment. The time format is ‘DD-MMM-YY HH:MM’ (24-hour time).
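For example, a variant of the query above (using the same record and field names) that pulls everything from a fixed start date up to now:

Select NAME, AW_H_TIME_1, AW_SSDEP_H from AW_DEPDEF
Where name = 'C01D_COLDP'
and AW_H_TIME_1 between '01-DEC-22 00:00' and CURRENT_TIMESTAMP;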
After the query is ready, you simply need to click the execute “!” button to generate the .csv file in the desired location.
There is of course extra customization that can be done, such as scheduling this query on InfoPlus.21 to create reports every so often, but what is shown is just the general example of the format that needs to be followed to generate .csv files from Aspen Watch records.
Keywords: Aspen Watch, Excel, extract, SQLplus
References: None |
Problem Statement: What is the importance of the Mat_Cost.ans file for the routing configuration in the Aspen OptiPlant model project folder? | Solution: Aspen OptiPlant 3D Layout (OptiPlant) is a tool for piping designers and engineers to rapidly build conceptual 3D models. OptiPlant enables you to model 3D equipment and structures and automatically route interference-free 3D pipe. OptiPlant is used in the proposal, front-end loading, and FEED stages to optimize and study plot plans, conceptualize the design, and produce an accurate piping and structural MTO. When you create a new project model in Aspen OptiPlant, some relevant files such as Pipe_spec, Mat_cost, and Projspec are automatically generated.
The Mat_cost file contains relative cost factors for each type of material. The materials listed in the PIP_SPEC.DAT file must be entered here in order to include the cost as a factor for the router. The cost factor of each material listed in this file affects the automatic SEQUENCING function during batch creation.
The default location is <Drive>:Program Files (x86)\AspenTech\Aspen OptiPlant <version>\Data\ANSI\ Mat_cost.ans. It is recommended to copy the file to the local project folder and edit this file to put in project specific specs, the program will look here first for the file.
The Mat_Cost.ans file is important for the routing configuration in Aspen OptiPlant 3D Layout because, if the material ID and its relative cost are not present in the file, the material cost for the project will be considered 0.
To find the cost of a pipe, the material cost factor is multiplied by the diameter and temperature of the pipe.
The pipe lines are then sorted in the Excel file according to the calculated cost; high-cost pipes are listed first and cheaper pipes below them in the Excel table.
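A minimal sketch of this sequencing logic (Python, illustration only; OptiPlant's actual cost model, units, and the line names used here are assumptions):

# Relative material cost factor x diameter x temperature, as described above.
pipes = [
    {"line": "ALLOY-650F-12IN", "mat_factor": 1.8, "diameter": 12, "temperature": 650},
    {"line": "CS-150F-6IN",     "mat_factor": 1.0, "diameter": 6,  "temperature": 150},
]
for p in pipes:
    p["cost"] = p["mat_factor"] * p["diameter"] * p["temperature"]
pipes.sort(key=lambda p: p["cost"], reverse=True)   # high-cost lines listed first
print([p["line"] for p in pipes])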
Keywords: Material cost, Pipes, OptiPlant, Project, Routing
References: None |
Problem Statement: Why does the license consumption recorded as SYSTEM in the user name, instead of the actual user name in the aspenONE SLM License manager? | Solution: The aspenONE SLM License Manager opens automatically connected to the current configured license server. The Locking Information and Configuration Information sections display SLM and system information, including the configuration settings set using the SLM Configuration Wizard. This information helps you understand how SLM is configured on your computer and is required by AspenTech to generate your license files. You may be asked to copy and send your configuration information to AspenTech when requesting a license or when troubleshooting a licensing problem with an AspenTech Support representative. The Copy to Clipboard button at the bottom of the window allows you to copy all the data displayed in this window.
The product SLM_CIMIO_Core is not part of the Aspen HYSYS tokens. It is one of the services running on your machine that is consuming the license; because the service runs under the local SYSTEM account, its consumption is recorded with SYSTEM as the user name. We recommend stopping the Aspen CIM-IO Manager service from the Services window to release the license.
Note: Aspen CIM-IO is a service used to connect the HYSYS model with DCS/OPC/Process Historian.
Keywords: CIM-IO Manager, User name, License
References:
https://esupport.aspentech.com/S_Article?id=000057458
https://esupport.aspentech.com/S_Article?id=000086893 |
Problem Statement: How can we change the Units of Measurement in the middle of building a project model in Aspen OptiPlant 3D? | Solution: Once you have provided a name for the plot-plan file, the next step is to set the properties of that plot plan by assigning the working units and the coordinates of the layout. The working units set at the beginning of the project are used throughout the modeling.
We cannot directly change the Units of measurement in the middle of a project modeling stage in OptiPlant.
The units will appear greyed out after the initial plot plan properties are set.
To change the units of measurement, export your project model to an Excel file.
Open the saved Excel file and proceed with the unit conversion as per your requirement.
Keywords: Aspen OptiPlant, Units, Excel, Export
References: None |
Problem Statement: Users may see the following popup while running a simulation in APS. What can they do to prevent this error? | Solution: This error (unable to open the debug file) is generated when the TIMR keyword is set to Y and APS is not able to write the Orion.csv file. The following is from the APS Help:
If TIMR is set to Y, performance profiling for units will be generated and saved to (WorkingFolder)\(Model ID)\Orion.csv. Changing the TIMR keyword in the Config table to 'N' will therefore solve this issue.
Keywords: None
References: None |
Problem Statement: What are the types of compressors I can find at ACCE V12? | Solution: Within Aspen Capital Cost Estimator, there is a classification of three different types of compressors, which are:
Air Compressors (AC)
Item type | Description
CENTRIF M | Centrifugal air compressor with motor
CENTRIF T | Centrifugal air compressor with turbine
RECIP GAS | Reciprocating air compressor with gas engine
RECIP MOTR | Reciprocating air compressor with motor
SINGLE 1 S | Single reciprocating air compressor - 1 stage
SINGLE 2 S | Single reciprocating air compressor - 2 stage
Gas Compressor (GC)
Item type | Description
CENTRIF | Centrifugal compressor - horizontal
CENTRIF IG | Centrifugal - integrated gear
RECIP GAS | Reciprocating compressor - integral gas engine
RECIP MOTR | Reciprocating compressor
Fans, Blowers (FN)
Item type | Description
PROPELLER | Propeller Fan
VANEAXIAL | Vaneaxial Fan
CENTRIF | Centrifugal Fan
ROT BLOWER | General purpose blower
CENT TURBO | Heavy duty, low noise blower
Keywords: Air compressor (AC), Gas compressor (GC), Fans, Blowers (FN)
References: None |
Problem Statement: What are the available licensing options for Aspen InfoPlus.21 starting from aspenONE V8.5 (Up to V12.2)? | Solution: The following licensing options were introduced starting from V8.5 and are applicable up to V12.2:
License Type | License Key | Usage
Standard | SLM_InfoPlus21_Points | For production environments, 32-bit
Development | SLM_RN_PME_IP21DEV_TK | For development environments, available on 32-bit installations only
Standard | SLM_RN_PME_IP64_PRDSRV | For production environments, 64-bit only
Standard | SLM_RN_PME_IP64_CLSSRV | For InfoPlus.21 64-bit on an MS Cluster system (available for customers on the New Commercial License (NCM) model only)
Embedded | SLM_InfoPlus21_Embed | For applications that have an embedded IP.21 license
OEM | SLM_RN_PME_IP64_OEMSRV | Original Equipment Manufacturer
In order to configure the appropriate Aspen InfoPlus.21 license type, please follow the directions in KB 140939
Keywords: Aspen InfoPlus.21 License Type
Standard
Embedded
Development
Backup
References: None |
Problem Statement: Starting in V14, the Aspen APC Web page (URL <WebServerName>/aspenapc) gives users the ability to enable and disable calculations for a deployed controller running online with toggle buttons labeled Input Calc Processing and Output Calc Processing. This is intended to be an Engineer-tunable parameter due to safety risks, however, it is by default available for Operators to change as well. This article provides the | Solution: to make these toggle switches to be Read-Only for Operators and Read-Write for Engineers.
Solution
Steps to make these Calc Processing toggle switches to be only changeable by users with Engineer permissions:
1. On the APC Web Server, navigate to this folder: C:\ProgramData\AspenTech\APC\Web Server\Products\APC
2. Open this file in a Notepad to edit: apc.product.config
3. Scroll down to lines 445 and 449 and change this entry from OperatorChange to EngineerChange:
4. Save and close this file.
5. Open a command prompt window (CMD) and run the command: IISRESET
After these changes, the two toggles should remain visible for Operators and Engineers but users with Operator permissions will not be able to take action on it.
Keywords: apc, web, calc, calculation, processing, toggle, switch, button, permission, engineer, operator, read, write
References: None |
Problem Statement: Cim-IO OPC UA interface processes (read/write/unsol) reject the untrusted certificate of the target OPC UA server | Solution: This can be identified by finding the following error messages in the log file of the Cim-IO processes (read/write/unsol):
cimio_ua_clientsession .Session failure, can't connect to Opc UA server
Exception:Opc.Ua.ServiceResultException: Certificate is not trusted.
Please follow the steps listed below to trust the OPC UA server's certificate.
1. Launch OPC UA Configuration Tool and choose “Manage Security” tab and click on “Find…”
2. Go to Program Files (x86) | Aspentech | Cim-IO | io | cio_opc_uai
3. Choose the cimio_opcua_read.exe file
4. Click Open
5. On Configuration file click on Browse… | Go to Program Files (x86) | Aspentech | Cim-IO | io | cio_opc_uai | choose cimio_opcua_read.config.xml
6. Click OK
7. Then Click on Select Certificate to Trust
8. In Store Type keep Directory
9. For Store Path go to RejectedCertificate folder of Cim-IO OPC UA Interface processes (C:\ProgramData\OPC Foundation\RejectedCertificates) and trust the OPC UA server's certificate by selecting the certificate and clicking OK
This way we establish trust between the Cim-IO OPC UA Interface’s Read process and the target OPC UA Server.
The below example contains the rejected certificate of AspenTech InfoPlus.21 OPC UA Server
10. Repeat from step 3 until Step 8 for Cim-IO OPC UA Write and Unsol process.
Keywords: BadCertificateUntrusted
Rejected server certificate
References: None |
Problem Statement: In the ACO platform, the user could copy over the CCF and MDL files to the Online Server and deploy from there, without opening the DMCplus Desktop tools to do so. In the RTE platform, the user has to connect the DMC3 Builder desktop tool to the Online server and deploy from there. Is it possible to mimic the ACO platform procedure of copying files over? How can I deploy an RTE application that was configured in DMC3 Builder without using the desktop tool connected to the Online Server? | Solution: The common use case for this question is trying to avoid the consumption of 4 tokens that DMC3 Builder takes when opened. Some users want to save those tokens to be used by the running application once it is deployed and started online.
Here are the workarounds that can be used to address this request:
Workaround 1: Deploy from DMC3 Builder, Close Out the Program, and then Start the Controller from PCWS Manage
If the user's requirement is only to avoid consuming the 4 tokens for having DMC3 Builder open at the same time as the tokens for a controller running online on the same machine, this workaround does it one step at a time, releasing the DMC3 Builder tokens before starting the controller online:
On the APC Online server, the user can open the DMC3 Builder project and deploy the controller to the local server (but don't start the process yet).
Once deployment is completed at 100% in the Online section of DMC3 Builder, close the software. This will release the 4 tokens in use.
Then open the PCWS > Online tab and select Manage from the header. This Manage section of PCWS is the equivalent of APCManage used for DMCplus controllers in the legacy platform but can also be used to Start and Stop RTE controllers. Click on the application name and Start the controller process.
If the controller needs to be redeployed in the future, make sure to use PCWS > Online > Manage to first stop the controller process (to release the tokens used by the running controller), then open DMC3 Builder, which will take up 4 tokens. The same is true if the user wants to take a snapshot of the RTE controller; it requires the DMC3 Builder project to be opened and cannot be done from the PCWS Manage page.
Workaround 2: Use the dmc3_manage.cmd Utility
This utility is a way to mimic the procedure of the legacy ACO platform where the user can:
Copy over the application file to the Online Server directory
Then run the dmc3_manage.cmd utility from the Command Prompt to Deploy the controller online
This utility has a lot of other capabilities as well, including Starting the controller process, but this can also be done from PCWS > Manage
In this article's attachments below is a zip file with an example and supporting documents about how to use the dmc3_manage.cmd utility. The advantage of this workaround is that the user does not need to open DMC3 Builder at all for the deployment process.
Workaround 3: Deploy Controller from Remote Desktop Machine
If the user's requirement is to avoid opening DMC3 Builder on the APC Online Server but is able to consume the tokens on a remote Desktop server without issue, then DMC3 Builder also supports remote deployment of the controller using the following steps:
Before trying to deploy, verify that the remote desktop and APC online servers are able to ping each other and open default port 12346 if a firewall exists between the machines.
Open DMC3 Builder on a remote Desktop machine.
In DMC3 Builder > Online view > Servers section, click Add from top tools ribbon and use the APC Online Server name as the Host. This is also the dialog where default port number 12346 is assigned.
Click OK and verify that the Status shows Connected. If it fails, check the communication between the machines.
Deploy the controller from remote desktop server to APC online server.
Keywords: dmc3, rte, apc, deploy, remote, without, builder, token, release, manage, dmc_manage
References: None |
Problem Statement: It’s critical to make sure that refinery planning models are accurate and in sync with changes in operating conditions or catalysts. However, updating planning models requires extensive effort and time due to complex and segmented workflows and a lack of collaboration between planners and process engineers. Many refineries are dependent on external consultants, which hinders regular updates and leads to lost profits.
In this example, you will learn how process engineers or planners can leverage the automated and streamlined Planning Model Update (PMU) workflow powered by Aspen HYSYS V9 or V10 to update refinery Planning models and:
Promote collaboration between planners and process engineers
Reduce time and effort required for updates
Boost profits through greater planning model accuracy
Become self-reliant in maintaining refinery planning tools
This example follows the workflow of using a pre-configured Excel PMU template and pre-configured HYSYS model. | Solution: You will learn to:
Get Plant Data:
Obtain plant data via Excel Add-In for calibration
Calibrate model to match plant data:
Tune the rigorous reactor model using plant data to obtain new sets of calibration factors
Validate model predictions:
Get the HYSYS model to match your plant data closely by validating the factor sets
Create LP base and shift vectors:
Set up the base case and run simulations by perturbing the shift variables to calculate the shift vectors
Update LP sub model:
Validate the updated vectors and present simulation data in desired format for PIMS
Comprehensive view of Plant Data vs Rigorous Model Prediction vs LP Prediction:
Track the HYSYS and LP model predictions closely against the Plant Data
If you want to download the blank templates only, please use this link. (https://esupport.aspentech.com/S_Article?id=000100523)
If you are interested in the Aspen Hybrid Models for Planning, please use this link. (https://esupport.aspentech.com/S_Article?id=000098202)
Keywords: Aspen Petroleum Refining, HYSYS, PIMS, Planning Model, FCC, Reformer, Hydrocracker, Rigorous Reactors, Refining Reactors
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve an issue where http://localhost:8080/AspenCoreSearch does not load, showing Error 404 on the webpage, which indicates possibly missing components. Also, the Direct to Plot option does not work when sending tags from graphics to trends. The trend will flash for a second, but then only a blank trend plot can be seen with nothing in the legend. | Solution: If AspenCoreSearch returns a 404 error, it may not have deployed correctly. If the Scheduler folder (see reference below) under the Tomcat appdata folder is not present, double-click the Tomcat7w (Tomcat8w or Tomcat9w) application, depending on the Aspen software version installed, in the Tomcat bin folder.
Please note:
For aspenONE Process Explorer V11 or earlier, the recommended Java version is Java 8.
For aspenONE Process Explorer V12 and later, the recommended Java version is Java 9.
Click the Java tab and make sure Java Virtual Machine points to a JDK or JRE for Java 7 (8 or 9). Set it using the ellipsis button if not set correctly.
Next, verify that the Tomcat webapps folder contains both the SOLR and AspenCoreSearch war files.
Tomcat folder can be found at this location:
v12 - the solr service is separated from the tomcat folder
C:\Program Files\Common Files\AspenTech Shared\Tomcat9.0.27
C:\Program Files\Common Files\AspenTech Shared\solr-8.2.0
v11
C:\Program Files\Common Files\AspenTech Shared\Tomcat8.5.23
v10
C:\Program Files (x86)\Common Files\AspenTech Shared\Tomcat8.0.36 (32-bit)
If the Java path is correctly defined for Tomcat and the War files exist then:
1. Stop the Apache Tomcat service.
2. Rename the Tomcat<version>\appdata folder (to appdata_old, for example). The appdata folder and its subfolders will be recreated when the Apache Tomcat service is restarted. Any missing or customized xml files in the newly created folders will have to be manually restored from the backup appdata_old folder. (E.g. \appdata\solr\collection1\conf\AspenSearchSolrSecurity.xml and files in the \appdata\scheduler\config\jobs folder).
3. Delete the solr and scheduler folders under appdata.
4. Delete the AspenCoreSearch and SOLR folders in WebApps.
5. Delete Tomcat's work folder.
6. Delete the files in the logs folder.
7. Start the Apache Tomcat service.
8. Verify that the new appdata folder and the subfolders renamed in step 2 have been recreated.
9. Stop the Apache Tomcat service again. Manually restore any missing xml files in the scheduler jobs folder (see step 2, use files backed up in appdata_old).
10. Restart the Apache Tomcat service.
11. Wait about 5 minutes until the SOLR and scheduler apps deploy.
12. Verify that you can log into Scheduler by navigating to http://localhost:8080/AspenCoreSearch/ and logging in with the default credentials (admin, admin).
13. Exit, stop the Apache Tomcat service, and run the aspenONE Credentials Tool to define the Domain security information. (Turn off Domain Security to confirm everything deployed correctly, i.e. the directories described above have been recreated.) Start Tomcat.
14. Try querying SOLR directly, e.g. http://localhost:8080/solr/select?q=*&username=domain\user where domain and user correspond to a typical user using A1PE (the username parameter is not needed if Domain Security is turned off).
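For reference, steps 1-7 above could be scripted in PowerShell along the lines of the minimal sketch below. It assumes a V12 layout and that the Tomcat Windows service is named 'Tomcat9'; verify the actual paths and service name on your system before running it.
# Stop Tomcat, back up appdata, clear the deployed apps and caches, restart
Stop-Service -Name 'Tomcat9'
$tomcat = 'C:\Program Files\Common Files\AspenTech Shared\Tomcat9.0.27'
Rename-Item -Path "$tomcat\appdata" -NewName 'appdata_old'
Remove-Item -Recurse -Force "$tomcat\webapps\AspenCoreSearch", "$tomcat\webapps\solr"
Remove-Item -Recurse -Force "$tomcat\work"
Remove-Item -Force "$tomcat\logs\*"
Start-Service -Name 'Tomcat9'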
If you still can't get Tomcat to work, collect the Tomcat logs from the logs folder for your version and send them to AspenTech Support for analysis.
Keywords: AspenCoreSearch
404
References: None |
Problem Statement: What is the file location of the templates that are listed on the Aspen Process Data ribbon? | Solution: Templates can be organized and stored for reuse for the following add-in categories:
Get Calculated Value
Get Calculated Values
Get Current Value
Get Historical Value
Get Historical Values
SQLplus Query
They are stored in a XML file called ExcelFormulaTemplates.xml in the folder:
C:\Users\%USERNAME%\AppData\Roaming\AspenTech\PME\ExcelAddin
You can directly edit this file using a text editor and share it with other users by copying it into their corresponding folder. The file itself gets read by the Aspen Process Data add-in when the user begins a new Microsoft Excel session.
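For example, here is a minimal PowerShell sketch for sharing your templates with another user on the same machine; 'OtherUser' is a placeholder profile name, and Excel should be closed in the target profile so the file is picked up on the next session.
# Copy the templates file from the current profile to another user's profile
$src = "$env:APPDATA\AspenTech\PME\ExcelAddin\ExcelFormulaTemplates.xml"
$dst = 'C:\Users\OtherUser\AppData\Roaming\AspenTech\PME\ExcelAddin'   # placeholder profile
Copy-Item -Path $src -Destination $dst -Force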
Keywords: Process Data Add-In
Excel Add-In
Templates
File Location
137757-2
References: None |
Problem Statement: Statistical Process Control (SPC) tags have binary-coded alarms stored in a field named Q_SUBGROUP_ALARM. How can the alarms be displayed as a comma-separated string list with the names defined in Q_XR_ALARM_RULES? | Solution: Here follows an example solution using Aspen SQLplus, which extracts this information from records defined by Q_XBAR21def, only returning alarms historized within the last week:
function get_Q_ALARM_DESCR(Q_SUBGROUP_ALARM int)
local ret Char(600);
local sep;
ret = '';
sep = '';
for (SELECT 1st_selection_value+OCCNUM-1 "Selection Value",
Q_ALARM_DESCR FROM Q_XR_ALARM_RULES.1) do
if BIT_AND(Q_SUBGROUP_ALARM, 1)=1 then
ret = ret || sep || Q_ALARM_DESCR;
sep = ', ';
end
Q_SUBGROUP_ALARM = BIT_SHIFT(Q_SUBGROUP_ALARM, -1);
if Q_SUBGROUP_ALARM=0 then
return ret;
end
end
return ret;
end
select NAME, Q_SUBGROUP_ALARM, get_Q_ALARM_DESCR(Q_SUBGROUP_ALARM) from
Q_XBAR21def
where Q_SUBGROUP_TIME > CURRENT_TIMESTAMP - 168:00:00;
Keywords: Q_XBARDef
Q_XBARSDef
Q_XBARCDef
Q_XBARCSDef
Q_CDef
Q_NPDef
Q_PDef
Q_UDef
Q_XBAR21Def
Q_XBARS21Def
Q_BatchXBARDef
References: None |
Problem Statement: How Do I Activate a Cloud Connect License? | Solution: Before starting the following procedures, you must ensure that your license file is correctly activated if a standard or premium license was purchased.
To activate the license file:
1. After the installation, open the Google Chrome browser and type the URL: https://<IP address of Aspen Connect machine>:6584
2. From the following page, click Advanced to expand the Advanced section.
3. From the Advanced section, click proceed to <Aspen Connect machine address> (unsafe) to open the Aspen Connect Login page.
4. From the Aspen Connect Login page, use the default username admin and the default password admin to log in. The Aspen Connect main page opens.
5. From the Aspen Connect main page, click the license activation icon. The Aspen Connect License Activation dialog box opens.
6. From the Aspen Connect License Activation dialog box, browse to select the license.dat file, and then click Activate.
7. After the license is activated successfully, the supported data collecting and publishing protocols will be displayed.
8. Click Close to complete.
Keywords: None
References: None |
Problem Statement: The aspenONE Search page displays the waiting spinner icon continuously, it never stops:
Elsewhere within aspenONE Process Explorer, the search feature appears to work correctly:
Exploring this issue further by making use of the developer tools in the web browser, there is a 404.0 - Not Found response being returned when opening the aspenONE Search page:
A similar response is returned from the following URL pasted into a web browser opened directly on the aspenONE web server: http://localhost/ProcessData/AtProcessDataREST.dll/Search?&url=/aspenONE/aspenONESearch.svc/&cond={text:facet.field=ace_nav&facet.field=sub_ace_nav&start=0&rows=1&q=*:*} | Solution: This problem would be apparent if Internet Information Services (IIS) does not have some necessary handler mappings for *.svc for the aspenONE web site. For example, if the following mappings are missing, then the symptoms described above will be evident:
svc-Integrated-4.0
svc-ISAPI-4.0_32bit
svc-ISAPI-4.0_64bit
To add these missing handler mappings, open Windows PowerShell on the web server (use Run as administrator option). Paste the following into PowerShell window:
New-WebHandler -Name "svc-Integrated-4.0" -path "*.svc" -verb "*" -type "System.ServiceModel.Activation.ServiceHttpHandlerFactory, System.ServiceModel.Activation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" -preCondition "integratedMode,runtimeVersionv4.0"
New-WebHandler -Name "svc-ISAPI-4.0_32bit" -path "*.svc" -verb "*" -modules IsapiModule -scriptProcessor "$env:windir\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" -preCondition "classicMode,runtimeVersionv4.0,bitness32"
New-WebHandler -Name "svc-ISAPI-4.0_64bit" -path "*.svc" -verb "*" -modules IsapiModule -scriptProcessor "$env:windir\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" -preCondition "classicMode,runtimeVersionv4.0,bitness64"
iisreset
You can ignore "Error: Cannot add duplicate collection entry" messages if they occur, but if at least one command works without error then this solution is likely to resolve the issue. The iisreset command will restart all IIS services. You should then find that the earlier URL returns a response without any evidence of error:
aspenONE Search page should now be returning results.
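If you want to double-check that the three handler mappings were created, a quick verification can be run from the same elevated PowerShell window; this sketch assumes the WebAdministration module (the same one that provides New-WebHandler) is available.
Import-Module WebAdministration
# List the svc handler mappings at the server level
Get-WebHandler | Where-Object { $_.Name -like 'svc-*' } | Select-Object Name, Path, PreCondition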
Keywords: Search for Everything
Upgrade upgrading
References: None |
Problem Statement: After evaluating and running a project in Aspen Process Economic Analyzer, this message appears: System.Data.SqlClient.SqlException (0x80131904): Invalid object name 'DatabaseState'. | Solution: Sometimes problems arise with existing databases, and the way to fix this problem is to delete the existing database and create a new one. To do this, the following steps must be followed:
1. Using Command Prompt, execute the following commands in the following order:
sqllocaldb p MSSQLLocalDB_EEV12
sqllocaldb d MSSQLLocalDB_EEV12
sqllocaldb c MSSQLLocalDB_EEV12 12.0 -s
2. After doing the above, delete the report files of the project that is causing the problem. (It is important to perform the above steps when starting the application and before opening any other project.)
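Optionally, you can confirm that the instance was removed and recreated using the info command of the same utility, run from the same Command Prompt:
sqllocaldb i
sqllocaldb i MSSQLLocalDB_EEV12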
Keywords: Exiting Database, Recreate Database, Report Files.
References: None |
Problem Statement: DMC3 controllers deployed from DMC3 Builder use the AspenTech Production Control RTE Service to run all processes related to the controller execution, as well as being responsible for data transfer to the Web Server, Aspen Watch Server, etc. This is different than controllers in the ACO legacy platform, which used multiple services to handle various functionalities from the controller execution. Due to this distribution of tasks, RTE Service tends to have a high consumption of resources on the APC Online Server.
As with any 32-bit program, RTE Service has a memory limitation and when it starts to get close to that limit, its behavior can be unpredictable and cause performance issues. For RTE Service that limit is around 2 GB, so when the service starts reaching memory consumption around this value or higher, the following symptoms may be observed:
After deploying or redeploying the controller, it takes a long time for it to show up on the PCWS web page list of online applications
The Deploy, Start and Stop actions respond very slowly
Deployment may fail completely with an error message Exception of type 'System.OutOfMemoryException' was thrown or Failed to transfer application. If this is the case, the user can also open the Event Viewer and navigate to the Application logs to see the out-of-memory exception error in more detail.
The way to check whether this is the cause on a system is to open Task Manager > Details tab and look for the memory consumed by RTEService.exe - if this is close to 2 GB or higher, it is possible to run into the performance issues mentioned above. The other tasks you may see are RTEApplication.exe for individual controllers that are deployed, so the user can check which application is consuming the most memory (the name of the application associated with the task can be seen in the Command Line column). | Solution: The short-term workaround when dealing with deployment failure due to System Out of Memory is to reboot the APC Online Server. This clears up some of the memory consumption; however, it can be an inconvenient workaround, especially when there are other running applications on the same server that would be disrupted by the reboot. The solution below helps mitigate this problem for the long term.
In some of these cases, these problems are related to the resources assigned to the server. The first step in troubleshooting this would be to check if the server specifications are in line with the recommendations provided in the Platform Specifications and in the APC Installation Guide (see Appendix C: Deployment Recommendations), which can be found on the Support site > Browse for Documentation.
Once it has been verified that the server specifications are in line with the recommendations, the following actions can be taken:
1. Make sure to apply any pending Windows Updates. Check Event Viewer to see if there are any errors associated with Windows Updates and apply them to be up to date.
2. Apply all of the latest patches for Aspen APC Desktop and APC Online available on the Support site.
3. Open the program Configure Online Server and lower the History Retention and Snapshot Retention, the minimum setting is 1 hour for each. Reboot the machine after making these changes.
4. Use the controller offset entry so that multiple controllers will execute at different times.
5. During deployment, there are some temporary files created that are used for the deployment and then read by the RTE application at each controller execution cycle. Ideally, these files should automatically disappear but sometimes this process can fail, thus the files start collecting and create issues during deployment. In this case, it requires a manual clean-up using the following steps:
a) Turn Off and Stop all DMC3 applications
b) Stop the AspenTech Production Control RTE Service
c) Go to the following path C:\Windows\Temp and there may be many files with extension .TMP. Proceed to delete all older TMP files in this folder. It is advisable to sort by date and not delete any current time files to avoid possible issues.
d) Restart the RTE service and start all DMC3 applications.
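As a convenience, step c) can be scripted; the minimal sketch below removes .TMP files older than one day from C:\Windows\Temp. Run it elevated, only after the applications and the RTE service have been stopped, and adjust the age cutoff as needed.
# Delete .TMP files older than 1 day, skipping any files currently in use
Get-ChildItem -Path 'C:\Windows\Temp' -Filter '*.tmp' |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-1) } |
    Remove-Item -Force -ErrorAction SilentlyContinue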
If the above mitigations do not prevent performance issues, it may be advisable to install another APC Online server and distribute the applications on both servers. For example, if there are multiple controllers on an Online server and one application in particular is very large and consuming the most memory then it is better to move that application to a separate online server. In another configuration there may be too many medium-sized controllers, in which case splitting them up between two servers would help improve performance.
Please note that this is a limitation of RTE Service being a 32-bit application. Converting it to a 64-bit application, which would allow greater memory utilization, is on the future road map.
Keywords: DMC3, RTE, deploy, fail, failure, failed, memory, system, exception, resource, consumption, TMP, temp, Files
References: None |
Problem Statement: How to extract the temperature profile inside riser of catalytic cracker model in Aspen HYSYS? | Solution: The riser sections are discretized using spline collocation. The user can see the temperature profile in the EO grid. So, if you go to the Reaction Section | EO Variables (from the main simulation environment) or the Operation | EO Variables (from the FCC environment), you can enter the following query:
*ris*vap_temp*[*
This will show the temperature profile along the length of the riser.
Also note that most of the EO variables that will show haven't been typed (UNTYPED) - they have units of F unless they show otherwise. Also, the final point of one spline is always equal to the initial point of the next spline, so there will be some duplication of information.
Keywords: Temperature, Riser, Spline, Catalytic Cracker
References: https://esupport.aspentech.com/S_Article?id=000056784 |
Problem Statement: When the Input Generation process is executed users sometimes get the warning Line <line ID> cannot be adjusted at Horizontal Heat Exchanger, too many <type> assignments. | Solution: Either assign some lines to different nozzle types, or assign them to other equipment.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: When the Input Generation process is executed users sometimes get the warning ‘Start Point (<point>) of Pipe <Line ID> and End Point (<point>) of Pipe <Line ID> are Very Close’ | Solution: This type of error occurs when:
If the project is mode 5, make sure there are no errors stating that the line could not be adjusted.
If the project is mode 1, the nozzle locations of the two pipes do not have enough space for each of the lines to enter or exit, or a nozzle has more than one pipe assigned to it.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: When the Input Generation process is executed users sometimes get the warning Line <line ID> cannot be adjusted at Air Fin, too many <type> assignments. | Solution: Either assign some lines to different nozzle types, or assign them to other equipment.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: When the Input Generation process is executed users sometimes get the warning Material <Material ID> Not Found In Material Cost Database for <Line ID> line. | Solution: This type of error occurs when:
The material specified for the pipe is not included in the material_cost data file.
The relevant material and pipe spec data files haven’t been edited to contain the material and specs being used in the project.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: When the Input Generation process is executed users sometimes get the warning Branch <Line ID> size (<size>) is larger than main <Line ID> size (<size>). | Solution: This type of error occurs when the nominal diameter of the branch is larger than that of the parent line.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: When the Input Generation process is executed users sometimes get the warning The start/end point of line is outside equipment | Solution: Either assign some lines to different nozzle types, or assign them to other equipment.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: When the Input Generation process is executed users sometimes get the warning Branch will not route before parent | Solution: This type of error occurs when the branch's placement in the batch file is incorrect. Place the parent line before the child in the batch file so that the branch line will route. This is done automatically by Auto Sequence during batch creation.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: When the Input Generation process is executed users sometimes get the warning Start/end nozzle point is un-initialized | Solution: This type of error occurs when:
If start/end point is equipment, the equipment is not in the equipment list. Either add the ID in the equipment list or remove the line.
If start/end point is a branch, the line is not defined in the line list and is not included in the batch file. Either add the header line ID to the line list or remove the line.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: When the Input Generation process is executed users sometimes get the warning Line <line ID> cannot be adjusted at Pump, too many <type> assignments. | Solution: Either assign some lines to different nozzle types, or assign them to other equipment.
Keywords: Aspen OptiPlant 3D Layout, Input Generation, Run Batch, Routing
References: None |
Problem Statement: In certain situations, when trying to launch a GDOT application that has APC Gateway configured, the launch will fail and the application log will show “Termination request from APC Subscription. Terminating application”. | Solution: The cause of this error is that when launching the GDOT application, APC Gateway tries to reach the APC server and all the variables configured in the application to check their status; if it cannot reach one or more of the points, the initialization will fail. Even if just one of the variables cannot be reached, the application will fail with the error noted above. This can occur in situations like:
The APC server name changed.
The APC application got deleted.
Variables on the APC controller were renamed.
Variables on the APC controller were deleted.
The GDOT log will usually (depending on the situation) point out which variable(s) are failing, with messages such as:
“Received Bad Tag for Server APC: /Independent/FIC-2001/Measurement – Reason: Entry not found.”
The solution in these cases is to check that the connection between the GDOT and APC servers is successful, and to verify the configuration of the APC variables defined in the application, i.e. that all the variables defined in the GDOT project also exist in running APC controllers. The mapping can be found in the MV & CV Config sections of the Aspen Unified GDOT Builder.
Keywords: GDOT, Unified GDOT Builder, APC, APC Gateway, APC subscription
References: None |
Problem Statement: On DMC3 controllers running online, when making changes to the ranks of CV limits, the controller may stop and the messages will show “Invalid rank list: error at rank X”. What is the reason behind this error message? | Solution: The cause of this error is a conflict between the Low Limit QPType (CVLPQL), the High Limit QPType (CVLPQU), and the solution type for specific ranks.
In DMC3 Builder, when configuring the ranks for CV limits, we can select the solution type for specific ranks to be either LP (linear programming) or QP (quadratic, least squares).
In the previous screenshots (Smart Tune and traditional tuning), ranks 6, 7 and 8 will use QP when running the steady-state feasibility module. Therefore, once we deploy the controller, the limits that have one of these three ranks will have CVLPQL or CVLPQU set to 1 (Yes):
The “Invalid rank list: error at rank X” error occurs when, for example in the previous screenshot, we change the AI-2020 SS High Limit Rank to 6 but leave CVLPQU at 0 (No). This causes the conflict and turns the controller off, since rank 6 is defined to use QP but the CVLPQU parameter says it is LP:
The way to avoid this problem is to make sure that CVLPQL and CVLPQU match the corresponding ranking for the high/low limit; in the example, the correct way to change the SS High Rank to 6 would be to also change CVLPQU to Yes.
Keywords: LP, QP, rank, CV limit, invalid rank list, error at rank, PCWS, DMC3
References: None |
Problem Statement: Logical Device records save values regarding the last Cim-IO store and forward execution, but there is no out-of-the-box mechanism to keep a log of previous store and forward executions, which can be useful when investigating system behavior problems. This article provides a query, defined by a QueryDef or CompQueryDef record, that creates a text log file to save information regarding store and forward executions. | Solution:
1. In SQLplus create a new query and copy in the following script:
Local DeviceName CHAR(50);
Local LogFilePath CHAR(250);
LogFilePath = 'C:\Temp\'; --Change this for the path to save the log, add \ at the end
DeviceName = cast(*Activation_field as field)->name;
SET APPEND LogFilePath || DeviceName || '_SF.log';
if **Activation_field is not null then
--Wait for IO_STR_ASYNC_END to update before write log
wait 100 FOR (cast(*Activation_field as field)->IO_STR_ASYNC_END > **Activation_field);
end
--Writing log
Write '';
Write '';
Write '******************* ' || GETDBTIME || ' *******************';
Write '';
select name as "Logical Device",
IO_STR_ASYNC_START as "Start time", IO_STR_ASYNC_END as "End time",
(IO_STR_ASYNC_END - IO_STR_ASYNC_START) as "Time in store mode (+hr:min:sec)"
from IoDeviceRecDef Where Name = DeviceName
group by name, IO_STR_ASYNC_START, IO_STR_ASYNC_END;
SET OUTPUT DEFAULT;
When activated by a Change Of State as configured in steps 3/4, this will update a log file with name: <logical device name>_SF.log. Change the path in the script for the log file if needed.
2. Click Record | Save As… and save this as a CompQueryDef or QueryDef record in IP.21 (for this example it is saved with the name "StoreForwardLog"):
3. Go to IP.21 Administrator and change the #WAIT_FOR_COS_FIELDS value from 0 to the number of logical devices you want to log (2 for this example):
4. Select the #WAIT_FOR_COS_FIELDS repeat area and add the logical devices to log, using the syntax "<logical device name> IO_STR_ASYNC_START", and set the COS_RECOGNITION field to "all" - see the note at the end of this article for a query that can achieve this:
5. Test store and forward by stopping TSK_A_<logical device name> for one of the logical devices defined in the last step. A file called SFLog.log or <logical device name>_SF.log (depending on the query selected in step 1) should be created in the defined path on the IP.21 server after TSK_A_<logical device name> is started again; it should show something like:
Steps 3 and 4 can be done quickly by using the following query (assuming that the query from step 1 was saved with the name "StoreForwardLog"):
For (Select Name From IoDeviceRecDef) DO
SET EXPAND_REPEAT = 1;
INSERT INTO StoreForwardLog(WAIT_FOR_COS_FIELD, COS_RECOGNITION)
Values (Name || ' ' || 'IO_STR_ASYNC_START', 'all');
END
This query will add all logical devices to the repeat area of the CompQueryDef or QueryDef record saved in step 2. Change "StoreForwardLog" to the name of your query as saved in step 2.
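While testing step 5, you can watch the log file update in real time from PowerShell; in this sketch the device name IOFIX1 and the C:\Temp path are placeholders, so use your own logical device name and the path set in the script.
# Follow the store-and-forward log as new entries are appended
Get-Content -Path 'C:\Temp\IOFIX1_SF.log' -Wait -Tail 20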
Keywords: SQLplus
Store and forward
S&F
Log
References: None |
Problem Statement: What is the compatibility policy for Software License Manager (SLM) Server and Client? | Solution: The Software License Manager (SLM) License Server must be equal or higher than the SLM Client. Deploying a SLM Client version that is higher than the SLM Server is not supported. When migrating to a new version of aspenONE, upgrading the SLM Server may not be needed, but it is recommended as it will introduce new functionality and fixes from previous versions. The SLM Server should always be upgraded first, followed by the SLM Clients. This is done to avoid any potential incompatibility issues.
The table below shows the compatibility between SLM Server and SLM Client delivered with aspenONE version media.
aspenONE Version | SLM Server Version | SLM Client built | SLM Server* 9.6.2 | SLM Server* 9.2.1.1606 | SLM Server* 8.6.1 | SLM Server* 8.5.3 | SLM Server* 8.4.0.900
V14 | 9.6.2 | 2022.14.0.715 | YES | Will not work | Will not work | Will not work | Will not work
V12.2 | 9.6.2 | 2020.12.2.622 (SLM 9.6) | YES | Will not work | Will not work | Will not work | Will not work
V12/V12.1 | 9.6.2 | 2020.12.0.610 (SLM 9.6) | YES | Will not work | Will not work | Will not work | Will not work
V11/V11.1 | 9.2.1.1606 | 2018.11.1.508 (SLM 8.6) | YES | YES | YES | Not Supported | Will not work
V10 SLM Patch Build 1606 | 9.2.1.1606 | 2017.0.1.419 (SLM 8.6) | YES | YES | YES | Not Supported | Will not work
V10/V10.1 | 8.6.1 | 2017.0.1.414 (SLM 8.6) | YES | YES | YES | Not Supported | Will not work
V9.0/V9.1 | 8.6.1 | 2016.0.1.378 (SLM 8.6) | YES | YES | YES | Not Supported | Will not work
V8.8 | 8.5.3 | 2015.0.1.357 (SLM 8.5) | YES | YES | YES | YES | Not Supported
V8.7 | 8.4.0.900 | 2014.0.1.344 (SLM 8.5) | YES | YES | YES | YES | Not Supported
V8.6 | 8.4.0.900 | 2013.0.1.332 (SLM 8.4) | YES | YES | YES | YES | YES
V8.5 | 8.4.0.900 | 2013.0.1.332 (SLM 8.4) | YES | YES | YES | YES | YES
V8.4 | 8.4.0.900 | 2013.0.1.326 (SLM 8.4) | YES | YES | YES | YES | YES
V8.3 | 8.4.0.900 | 2012.0.1.306 (SLM 8.4) | YES | YES | YES | YES | YES
V8.2 | 8.4.0.900 | 2012.0.1.306 (SLM 8.4) | YES | YES | YES | YES | YES
V8.1 | 8.4.0.900 | 2012.0.1.306 (SLM 8.4) | YES | YES | YES | YES | YES
V8.0 | 8.4.0.900 | 2012.0.1.304 (SLM 8.4) | YES | YES | YES | YES | YES
V7.3 | 8.4.0.900 | 2010.0.1.287 (SLM 8.4) | YES | YES | YES | YES | YES
V7.3 | 8.4.0.900 | 2010.0.0.287 (SLM 7.3) | YES | YES | YES | YES | YES
V7.2 | 8.4.0.900 | 2009.0.1.273 (SLM 8.2) | YES | YES | YES | YES | YES
V7.2 | 8.4.0.900 | 2009.0.0.273 (SLM 7.3) | YES | YES | YES | YES | YES
V7.1 | 8.4.0.900 | 2009.0.1.265 (SLM 8.2) | YES | YES | YES | YES | YES
V7.1 | 8.4.0.900 | 2009.0.0.265 (SLM 7.3) | YES | YES | YES | YES | YES
V7.0 | 8.4.0.900 | 2008.0.1.258 (SLM 8.1) | YES | YES | YES | YES | YES
V7.0 | 8.4.0.900 | 2008.0.0.258 (SLM 7.3) | YES | YES | YES | YES | YES
How to Read the Compatibility Matrix
Scenario 1: If running aspenONE V12 (SLM Client Tools version 9.6) and SLM Server version 8.5.3, it will not work. The SLM Client Tools are at a higher version than the SLM Server.
Scenario 2: If running aspenONE V12 (SLM Client Tools version 9.6) and SLM Server version 9.6, it will work. The SLM Client Tools are at the same version as the SLM Server.
Scenario 3: If running aspenONE V11 (SLM Client Tools version 8.6) and SLM Server version 9.6, it will work. The SLM Server is at a higher version than the SLM Client Tools.
For instructions on how to find the SLM Server version, refer to KB 000072624
Keywords: SLM
Compatibility
Coexistence
Sentinel version
References: None |
Problem Statement: It may be desired to change the URL the Aspen Mtell Alert Manager shortcut points to, such as in the case that an alias is being used or the path has been modified. This article shows the steps to change the shortcut URL. | Solution: The Alert Manager shortcut is created when you install Alert Manager on a server, so these steps should be done on the Alert Manager server.
1. Click the Windows icon to open the Start menu.
2. Locate and right-click on Aspen Mtell Alert Manager.
3. Select Open file location.
4. In the File Explorer, right-click on the shortcut Aspen Mtell Alert Manager.
5. Select Open file location.
6. Copy APMUIShortcutCur.
7. Paste the file in another location, such as your Desktop.
8. Right-click on the pasted file and select Properties.
9. Type your desired URL in the box.
10. Click OK.
11. Go back to the Frontend folder from Step 6 and rename the old shortcut.
12. Copy the modified shortcut with the new URL.
13. Paste it into the Frontend folder. You will most likely be prompted to provide administrator permission.
14. Open the Start menu and click on the Aspen Mtell Alert Manager icon. It should now take you to the URL you entered in Step 9.
15. You can delete APMUIShortcutCur from the location you copied it to in Step 7.
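If you prefer to script steps 8-10 instead of using the Properties dialog, the sketch below rewrites an internet shortcut via the WScript.Shell COM object. It assumes the shortcut is a standard .url file; the output path and URL are placeholders to be replaced with your own.
# Create or overwrite a .url shortcut pointing at the new address
$shell = New-Object -ComObject WScript.Shell
$sc = $shell.CreateShortcut("$env:USERPROFILE\Desktop\APMUIShortcutCur.url")
$sc.TargetPath = 'https://apm-alias.example.com/'   # placeholder URL
$sc.Save()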
Keywords: Aspen Mtell Alert Manager
Alert Manager Shortcut
Alert Manager Alias
Alert Manager URL
Alert Manager Application Path
References: None |
Problem Statement: Is it possible to have a Visual Basic Macro inside an Excel Calculator block?
This used to work, but now the Excel Calculator no longer opens, showing the message "Microsoft Office has identified a potential security concern. Microsoft has blocked macros from running because the source of this file is untrusted." without any option to Enable Macros:
Changing options in the Trust Center in Excel to Enable VBA macros does not help. | Solution: Macros inside the Excel Calculator pose a security threat, so they are not allowed.
In earlier versions of Excel it was possible, but Excel version 2209 and higher does not allow the Calculator to open in Excel if there are macros in it.
Keywords: None
References: VSTS 802619
Problem Statement: New features in Aspen Production Execution Manager (APEM) for V10, V11 and V12. | Solution: Please find the attached presentation, which explains the new features of the APEM application in V10, V11 and V12.
Also find the V12 release notes PDF attached.
Keywords: None
References: None |
Problem Statement: You may find a -nan(ind) message displayed in the non-linearity ratio column when running a case or a parametric analysis. | Solution: -nan(ind) means there is a mathematical error such as ln(0), x/0 and so on. This could be introduced accidentally by the user in the Non-Linear Equations (NLE). For example, an NLE with logarithmic terms could work fine with a given input solution. However, if a parametric analysis is run, random inputs will be used and a 0 could therefore be picked as a starting point.
To solve this, make sure to set proper bounds when incorporating a NLE into the model.
Keywords: Non-Linear Equations, NLE, parametric analysis, -nan(ind), NonLinearity ratio column
References: None |
Problem Statement: I created an Aspen Unified Snapshot to share it. However, when I opened it, it didn’t have the latest changes. Why is that? | Solution: When a user opens a model workspace, it opens a sandbox which only belongs to the current user. When you create a snapshot inside your model workspace, it is based on the sandbox. Hence, the snapshot will have the latest changes.
But if you create a snapshot on the home page, it will be based on the last checked-in version, which is visible to all users. If you create a check-in version of the model, the snapshots created in the workspace and on the home page will be the same.
You therefore need to check in the model first and then create the snapshot from the home page, or create it from inside your workspace, if you want to see the latest version of the model.
Keywords: Model Life Cycle Management, MLCM, sandbox, snapshot, AUP, AUS.
References: None |
Problem Statement: AUP does not show the desired Units of Measure (UOM) | Solution: Go to the model in which you want to change the UOM. Select the “Settings” button on the left menu, then General Settings.
Select the “UI Behavior” tab and go to the “Units of Measure” section. Select the desired UOM from the drop menus. You can change all the UOM by changing the “Unit set” or a particular one by choosing from the below list. Should you want to reset to the default settings, you will see a blue button on your top right corner.
Keywords: Units Of Measure, UOM, AUP, Aspen Unified PIMS
References: None |
Problem Statement: When I try to access the Aspen Unified Home Page, it won’t stop loading. | Solution: The most common reason is firewall protection. Make sure that the user’s machine and the servers hosting the database, license and application have port 5093 UDP open for communication.
Keywords: Aspen Unified, loading, firewall protection
References: None |
Problem Statement: My organization has very strict cyber security protocols. Hence, I have been experiencing some problems when using Aspen Unified PIMS, Scheduling or GDOT because firewall protection is blocking the communications between the App, Database and License Servers. Which ports should be open for communication for AU to work correctly? | Solution: The user’s machine and the servers that host the DB, license and app should have port 5093 UDP open for communication to access the license.
Port 9750 is needed for the master/slave connection between the primary and secondary AUP servers.
To access HTTP, you should have port 80 open for communication.
To access HTTPS, you should have port 443 open for communication.
To access the DB, you should have your SQL port open for communication. That is by default server port TCP 1433 and UDP 1434, but it is sometimes changed for security purposes.
Finally, if you have Active Directory, open the following ports (a quick TCP connectivity check is sketched after the list):
UDP&TCP 135
UDP&TCP 389
UDP&TCP 434
UDP&TCP 53
UDP 88
TCP 139
UDP 138
UDP 445
UDP&TCP 3269
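To verify the TCP ports from a client machine, you can use PowerShell's Test-NetConnection; the server names below are placeholders, and note that this cmdlet cannot test UDP ports such as 5093, which need to be checked through firewall rules instead.
Test-NetConnection -ComputerName AUPAppServer -Port 443    # HTTPS to the app server
Test-NetConnection -ComputerName SqlServer01 -Port 1433    # default SQL Server port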
Keywords: Port, cyber security, AUP, AUS, GDOT, firewall, http, https
References: None |
Problem Statement: How to solve the AI Training access error “When Aspen OnLine V12.1 service or Aspen OnLine Interactive Service V12.1 is running, simulation case cannot be located under the user specific folder”? | Solution: First principles driven hybrid models combine mechanistic models with machine learning, it has powerful features that can enhance an existing first-principles model performance by augment using AI with data from operations. From V12.1, users can apply this feature in Aspen Plus/HYSYS as shown below:
Some users may occasionally see the warning message below when accessing AI Training after a fresh installation:
This is because the AI Training feature uses Aspen OnLine, and when the Aspen OnLine V12.1 service is running with a user account log on, it is not allowed to access this feature when the simulation file is located in a user-specific folder (for example, the C:\Users\Aspen\Desktop\Aspen Hybrid Models folder) - this is a limitation imposed by Windows/Microsoft. Users can check the log on by opening Services, right-clicking Aspen OnLine V12.1 and selecting Properties:
Please do the following to resolve this issue to access AI training:
1. One simple way is to copy the simulation file into a non user specific folder, such as directly under the C:\... drive.
2. Another way is to change the AOL V12.1 service to run as the Local System account (a command-line equivalent is sketched after these steps).
Go to the Services window and look for Aspen Online Service V12.1
Right-click and choose properties
Go to the 2nd tab named “Log on” and select the first option Log on as “Local System account”
Restart the Service if it is currently running when making changes.
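The same change can be scripted with sc.exe; in this sketch 'AspenOnline121' is a placeholder for the actual service short name (readable from the service's Properties page), and the space after obj= is required by sc.exe.
sc.exe config "AspenOnline121" obj= LocalSystem
sc.exe stop "AspenOnline121"
sc.exe start "AspenOnline121"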
Please also install Aspen Online V12.1 Emergency Patch 1 from the link below:
https://esupport.aspentech.com/apex/S_SoftwareDeliveryDetail?id=a0e4P00000QDW3VQAX
Please note that the suggestions in this article will not apply if you install SQL Express and run the AOL V12.1 service as Local System. The patch no longer works, and the only option for users to store files in user-specific folders is to do the following:
1) Be logged in as Local Admin user
2) Use that Local Admin account to run the Aspen Online service
3) Install Aspen Online v12.1 Emergency Patch 1
Keywords: AI Training, First Principle Driven Hybrid Model, Hybrid Modeling
References: None |
Problem Statement: From the Aspen APC Web Viewer (PCWS), when the user clicks to open the controller details page, entry dictionary, or any context menu, a dialog box opens with the following error message:
This operation has been cancelled due to restrictions in effect on this computer. Please contact your system administrator. | Solution: This issue can occur when the PCWS web page is being accessed on an operator workstation that does not allow pop-ups, so when the user clicks on a header, the pop-up is blocked.
To resolve this issue, go to the Preferences tab in PCWS, check the box for Enforce single-window environment, and click Apply. Then try opening the window again. This will allow dialog boxes and detail pages to be displayed in embedded window spaces, within the main window display, instead of as a separate pop-up window.
Please note that the settings changed under Preferences are user-specific, so the change may need to be applied for each user logging into the client machine.
Keywords: pcws, restrictions, error, pop-up, window, dialog
References: None |