Problem Statement: There is a need to install or register the 32-bit SQLplus ODBC driver on the client machine. However, this needs to be done without having to install all the client applications.
Solution: Below are the steps that allow registration of the 32-bit SQLplus ODBC driver on the client machine. The files can be copied from the Aspen InfoPlus.21 server.

1. Copy ip21odbc.dll from C:\Windows\SysWOW64 on the server to C:\Windows\SysWOW64 on the client.
2. Copy libc21.dll and PFWAuth.dll from C:\Program Files (x86)\Common Files\AspenTech Shared on the server to C:\Program Files (x86)\Common Files\AspenTech Shared on the client.
3. Add C:\Program Files (x86)\Common Files\AspenTech Shared to the PATH system variable.
4. Browse to C:\Windows\SysWOW64 in Windows Explorer, right-click cmd.exe, and select Run as administrator from the context menu.
5. In the command prompt, change to that folder: cd C:\Windows\SysWOW64
6. Register the AspenTech SQLplus driver using the following command (the quotes prevent the command interpreter from treating the pipe character specially): odbcconf.exe INSTALLDRIVER "AspenTech SQLplus|Driver=C:\Windows\SysWOW64\ip21odbc.dll"
7. Reboot.

*Note: if you are installing the driver on a client machine where C:\Program Files (x86)\Common Files\AspenTech Shared is already included in the PATH system variable, you do not need to reboot. Adding the file path to the registry is what requires a reboot.

Keywords: None References: None
Problem Statement: There is a need to install or register the 64-bit SQLplus ODBC driver on the client machine. However, this needs to be done without having to install all the client applications.
Solution: Below are the steps that allow registration of the 64-bit SQLplus ODBC driver on the client machine. The files can be copied from the Aspen InfoPlus.21 server.

1. Copy ip21odbc.dll from C:\Windows\System32 on the server to C:\Windows\System32 on the client.
2. Copy libc21.dll from C:\Program Files\Common Files\AspenTech Shared on the server to C:\Program Files\Common Files\AspenTech Shared on the client.
3. Add C:\Program Files\Common Files\AspenTech Shared to the PATH system variable.
4. Browse to C:\Windows\System32 in Windows Explorer, right-click cmd.exe, and select Run as administrator from the context menu.
5. In the command prompt, change to that folder: cd C:\Windows\System32
6. Register the AspenTech SQLplus driver using the following command (the quotes prevent the command interpreter from treating the pipe character specially): odbcconf.exe INSTALLDRIVER "AspenTech SQLplus|Driver=C:\Windows\System32\ip21odbc.dll"
7. Reboot.

*Note: if you are installing the driver on a client machine where C:\Program Files\Common Files\AspenTech Shared is already included in the PATH system variable, you do not need to reboot. Adding the file path to the registry is what requires a reboot.

On Windows Server 2016, the appropriate Microsoft Visual C++ Redistributable (x64) is also required; otherwise System Error Code 126 will be observed when using the Microsoft ODBC Administrator. These redistributables (called vcredist_x64.exe) can be found on the installation media in the folder \aspenonemscdvd\core:
- SQLplus ODBC driver V9.0-V10.0 = folder vcredist_x64_VS2013SP1
- SQLplus ODBC driver V11.0+ = folder vcredist_VC2017

Keywords: None References: None
Problem Statement: How do you create pressure-enthalpy (PH) curves?
Solution: There is no built-in PH envelope analysis similar to the PT envelope. However, it is possible to use a Generic Analysis or a Sensitivity block that draws a pressure-enthalpy diagram. The attached example runs in V11 and higher. To see the PH diagram, go to Sensitivity | Results, select enthalpy as the X variable, pressure as the Y variable, and temperature as the parametric variable, then display the plot. Keywords: None References: None
Problem Statement: The Cim-IO patch release notes do not go into much detail about which services need to be stopped when installing patches on a machine that has both APC components and Cim-IO. If the required services are not stopped, the patch installation will fail.
Solution: For a Cim-IO patch to be correctly installed, all of its client services must be stopped. If any of them is running, it will lock the Cim-IO files (since they are in use) and the patch installation will fail, showing an error in the log similar to this one:

2023-05-29 22:17:06.890 +00:00 [Error] Exception while installing file 'cimio.dll'. Exception: The process cannot access the file 'C:\Program Files (x86)\Common Files\AspenTech Shared\cimio.dll' because it is being used by another process. Please close all aspenONE applications and rerun the EP Installer.

Here is a list of the services that need to be stopped, depending on which components are installed on the server (in addition to the ones mentioned in the Cim-IO release notes, such as Cim-IO Manager, InfoPlus.21, Process Simulator, etc.):

Aspen APC Online server, V12.1 or older:
- ACO Utility Server
- Aspen APC DMCplus Data Service
- DMCplus Context Service
- Aspen APC Inferential Qualities Data Service
- AspenTech Production Control RTE Service

Aspen APC Online server, V14 or newer:
- Aspen APC ACO Utility Server
- Aspen APC DMCplus Context Service
- Aspen APC DMCplus Data Service
- Aspen APC Inferential Qualities Data Service
- Aspen APC Production Control RTE Service

Aspen APC Watch server:
- Aspen APC Performance Monitor Data Service

NOTE: Stopping these services will stop the controllers that are running on the server, so if you are going to apply a Cim-IO patch, make sure to properly follow the shutdown procedure and inform the operations personnel of this temporary shutdown. Also, while the services are stopped, the controllers will temporarily disappear from the APC Web interface.

After the patch is successfully installed, you can reboot the machine (if required) or simply restart the services and continue operations as usual. Remember to check the Cim-IO Interface Manager to make sure that the interfaces are running and that the connection to the OPC / InfoPlus.21 database is successful. 
Keywords: APC, Cim-IO, cimio, patch, install, service References: None
Problem Statement: This knowledge base article discusses whether the format of the time or date displayed by the dmcpmanage command can be changed.
Solution: Some users have requested to change the format of the date and time displayed by the dmcpmanage command from the current version's format (V12) to the format used in the older version (V8.4). The current version displays additional details, using the format MMM DD HH:MM:SS YYYY, while the old version displays the time in the format MM/DD/YY HH:MM:SS. V12 view: (screenshot) V8.4 view: (screenshot) After reviewing the request, the answer is that it is not possible to change the date/time format in the output of the manage list command. The format is determined by the locale settings on the system and is not configurable for the manage output. Keywords: dmcpmanage command, date format, time format References: None
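For reference, the two display formats described above correspond to the following Python strftime patterns. This is purely illustrative (it assumes an English locale for the month abbreviation, and an arbitrary timestamp); it does not make the dmcpmanage output configurable.

```python
from datetime import datetime

# Arbitrary timestamp for illustration only.
ts = datetime(2023, 5, 29, 22, 17, 6)

# V12 display: MMM DD HH:MM:SS YYYY
v12_style = ts.strftime("%b %d %H:%M:%S %Y")   # "May 29 22:17:06 2023"

# V8.4 display: MM/DD/YY HH:MM:SS
v84_style = ts.strftime("%m/%d/%y %H:%M:%S")   # "05/29/23 22:17:06"

print(v12_style)
print(v84_style)
```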
Problem Statement: Custom agents were released in Aspen Mtell V14.0.1 and allow users to create agents based on custom models they develop. Custom agents require that the model must expose a well-defined web end point that can be called as an API service. This article provides an example script and gives instructions on how to spin up an API service using the Flask web framework for one of these custom agents. The example script is not meant to be monitored as a live agent in Aspen Mtell and is solely provided as an example to guide you in formatting your own custom models. Documentation on the Flask framework can be found at the following link: https://flask.palletsprojects.com/en/2.3.x/
Solution:

1. Install Python on the machine where you want to host the API. It does not have to be the Mtell server, but the Mtell server must be able to reach the API host machine. Python can be downloaded here: https://www.python.org/downloads/
   - Select the checkbox to Add python.exe to PATH.
   - If "disable path length limit" appears at the end of installation, select that option.
   - Restart the computer so the PATH change can take effect.
2. Open a command prompt as an administrator.
   - Install Flask with the command: pip install flask
   - Install wfastcgi with the commands: pip install wfastcgi and then wfastcgi-enable. The wfastcgi-enable command prints a directory; copy it and save it for later.
   - Install any other modules used by your project.
3. Next, set up an IIS website.
   - Create a folder in File Explorer to hold the files for your website. C:\inetpub\wwwroot is the IIS root directory.
   - Paste the attached web.config as well as the .py file with your model into this folder. flaskapp.py is attached as an example.
   - Replace the scriptProcessor in web.config with the directory that wfastcgi-enable printed earlier.
   - Replace flaskapp.app with "name of your .py file".app. For example, flaskapp.py becomes flaskapp.app.
   - Replace C:\inetpub\wwwroot\FlaskApp with the path to the folder that holds your website's files.
4. Run IIS Manager as an administrator.
   - Go to Connections and expand the tree.
   - Select "Sites".
   - Select "Add Website" under the Actions panel on the right of the window. A new window will pop up titled "Add Website". Fill in the necessary info: site name, directory containing the website content, IP address, and port.
5. Return to the node with the name of your server. You might need to restart the IIS server; this is in the Actions section on the right.
6. With the server node selected, go to FastCGI Settings and Handler Mappings and check that the expected entries show up (the FastCGI entry for your Python installation, and the handler mapping for the website you created in IIS).
7. Grant the website's application pool access to the Python installation folder:
   - Go to the folder where Python is installed. If you are not sure where it is, open a command prompt and run: where python
   - Right-click the folder and go to Properties. Select the Security tab and click Edit under "Group or user names". Click Add.
   - If your computer is attached to a domain, click "Locations…", change to the name of the computer at the top of the list, and select OK.
   - Then type in the box: iis apppool\[The name of the website] and click OK.
8. To access your API, you can use the following URL: http://hostname:port/flaskapp. Replace hostname and port with those of the IIS website you created. flaskapp can be replaced with the method you're calling if you've written your own model.

Your custom model can now be called as an API. Set up and test a custom agent from within System Manager; see this KB for details.

See below for a summary of the provided example model.

Input variables: x1, x2
Parameters: factor
Output variables:
- y1 – this variable is equal to x1 + factor*x2
- y2 – this variable is equal to x2 + factor*x1

JSON request format:
{
  "inputVariables": [
    {"name": "x1", "values": [5.8]},
    {"name": "x2", "values": [70]}
  ],
  "parameters": [
    {"name": "factor", "value": 1}
  ]
}

Keywords: Custom model Custom model API Custom agent Python agent Flask References: None
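The arithmetic of the example model summarized above can be sketched in plain Python, independent of the Flask wiring. This is a sketch only; the attached flaskapp.py is the actual example, and handle_request here is a hypothetical helper name:

```python
import json

def handle_request(payload):
    """Compute the example model's outputs from a JSON request payload.

    Mirrors the example model summary: y1 = x1 + factor*x2 and
    y2 = x2 + factor*x1. The real service wraps logic like this in a
    Flask route; this sketch shows only the unpack/compute/pack steps.
    """
    req = json.loads(payload)
    inputs = {v["name"]: float(v["values"][0]) for v in req["inputVariables"]}
    params = {p["name"]: float(p["value"]) for p in req["parameters"]}

    y1 = inputs["x1"] + params["factor"] * inputs["x2"]
    y2 = inputs["x2"] + params["factor"] * inputs["x1"]

    return json.dumps({"outputVariables": [
        {"name": "y1", "values": [y1]},
        {"name": "y2", "values": [y2]},
    ]})
```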
Problem Statement: Custom agents were released in Aspen Mtell V14.0.1 and allow users to create agents based on custom models they develop. These agents run via an external service but use the data collected by Aspen Mtell and run in the same processing cycles as other Aspen Mtell agents. This article gives instructions on how to deploy one of these custom agents.
Solution:

1. Develop your model.
2. Put your model online using a web interface on a computer accessible to Aspen Mtell. The specifications for this interface are included in the Custom Model API KB article here. For tutorials on how to put a custom model online, please see the following KB articles: Jupyter Kernel Gateway, Flask, Bottle.
3. In Aspen Mtell System Manager, on the Equipment tab, click an asset, then click Custom Agents, and then in the ribbon click New. When you click Custom Agents, a list of existing custom agents for that asset appears to the right; you can click one of those agents and then Edit in the ribbon to edit it.
4. In the Custom Agent Configuration Wizard, specify a name and optionally a description for the agent. Then add the sensors the agent needs by adding them from a sensor group, as individual sensors, or by defining new calculated sensors using the Rule Editor. For each sensor, specify a Variable Name; this is the name the variable is given when it is passed to your model. You can click the X under Actions to remove a sensor. Click Next when you have selected all the sensors and specified their names.
5. In step 2 of the wizard, enter the names of the output variables from your model. The names you should enter are the names the model returns through the interface described above. Click Next when you have entered all the names.
6. In step 3 of the wizard, specify the URL End Point where your custom model was put online above. If necessary, you can change the Locale setting (which determines the decimal separator used in writing real numbers). Also define the parameters needed by your custom model. These are also model inputs, but rather than coming from a sensor, they take fixed values that are the same each time this agent is called (but could be different for other agents based on the same custom model). Click Next when you have entered all the parameters and their values.
7. In step 4 of the wizard, specify the Agent Execution Frequency, the interval at which the agent is run, based on the three different processing threads. Also specify the Corrective Steps to Perform, which are made available with alerts generated by the agent. Optionally, you can specify an offline condition using the Rule Editor; if the asset is offline, the agent will not be called. Click Next.
8. In step 5 of the wizard, specify the thresholds for the agent to issue an alert. If Low Threshold is specified, the agent alerts when the value returned by the custom model falls below this value. If High Threshold is specified, the agent alerts when the value from the custom model is above this value. You do not have to specify a threshold for each output variable, but you must specify at least one threshold for at least one output variable. You can optionally specify the minimum alert duration and whether to send email notifications (and select the template for those emails). Also specify the severity for the alerts issued by this agent. When you are done, click Finish.

Testing the agent

You should test your custom agent to ensure it is configured correctly. In Aspen Mtell System Manager, on the Equipment tab, click the asset, then click Custom Agents. In the list of custom agents, right-click the agent name and click Test. In the Test screen that appears, enter values for the input variables and parameters defined in your model, then click Test. The values returned by the service for your custom model are shown in the Outputs section, and the raw server response is shown in the Server Response section, which may be useful for debugging your model. This test can help you confirm that your custom model service is working properly, that the URL End Point is specified correctly, and that the custom model returns the outputs expected for your specified inputs.

Deploying the agent

To deploy the agent, in Aspen Mtell System Manager, on the Equipment tab, click the asset, then click Custom Agents. In the list of custom agents, right-click the agent name and click Enable. It will be included in the next agent processing cycle, and it will appear in the Live Agents tab in Aspen Mtell Agent Builder, where you can see the trends of all output variables from the agent. From the list of custom agents, you can also clone the agent, add notifications, and perform other actions available for most agent types. In the panel to the right of the custom agent list, you can see the detailed settings for the selected custom agent; some of these settings are editable from this panel.

Keywords: Custom model Custom agent Deploy custom agent References: None
Problem Statement: Custom agents were released in Aspen Mtell V14.0.1 and allow users to create agents based on custom models they develop. These agents run via an external service but use the data collected by Aspen Mtell and run in the same processing cycles as other Aspen Mtell agents. This article details the requirements for a custom model and its web interface.
Solution: Model Authoring This section describes the specifications models must follow to work as custom agents in Aspen Mtell. A model can be written in any language. A model must expose a well-defined web end point that can be called as an API service. Aspen Mtell sends data to the model using the JSON structure defined below. The model must be capable of unpacking this structure to access the data. Aspen Mtell encodes numeric values into strings using the locale defined in the custom agent and sends these strings to the custom model in this JSON structure. The custom model is expected to convert the strings back to numbers in order to use them. A model must send results back to Aspen Mtell using the JSON structure defined below, encoding numeric values as strings. Aspen Mtell interprets these values in the results using the locale specified for the agent. A model is expected to implement its own unit-of-measure conversions if needed. Request Payload JSON Structure The JSON request which Aspen Mtell sends to the custom model API contains two array objects, inputVariables and parameters. The inputVariables array contains one or more variables. Each variable is an object with name and values attributes. The name attribute is a string. The values attribute is an array. In this version, the array always contains one value. Multi-element inputs are not supported. The parameters array contains zero or more parameters, each with name and value attributes. The values are sent as strings as they were entered in the wizard. Values are encoded into the locale specified during agent configuration. Below is an example JSON request payload with three input variables and two parameters. The first parameter is a string while the second is a number. 
{
  "inputVariables": [
    {"name": "T-100", "values": [5.8]},
    {"name": "flowrate", "values": [70]},
    {"name": "vibration", "values": [0.5]}
  ],
  "parameters": [
    {"name": "loc", "value": "Boston"},
    {"name": "X", "value": 55.8}
  ]
}

Response Payload JSON Structure

The custom model must return a JSON response which contains an outputVariables object. This object should contain one or more variables, each with name and values attributes. The values attribute is an array. Other objects may optionally be included in the payload. The full JSON payload is displayed in the Server Response box in the custom agent Test screen, so these optional objects can be useful in testing. Below is an example JSON response structure with two output variables and a status object that holds a message.

{
  "outputVariables": [
    {"name": "UA", "values": [70585.3]},
    {"name": "Q-100", "values": [11000]}
  ],
  "status": {"message": "Success"}
}

Model Hosting

This section covers hosting and execution environments for custom models.

Web Servers

You can use any hosting environment you prefer. Aspen Mtell puts no constraints on this and only accesses the web API as described above. For models written in Python, there are several lightweight Python web servers and frameworks that could be used to host the model, including Bottle, Flask, Django, FastAPI, and Tornado. Each of these comes with extensive documentation on how to expose a model as an API service. Jupyter Kernel Gateway is a framework that allows you to use a Jupyter Notebook as an API service. These KB articles show how to deploy a custom model using Bottle, Flask, and Jupyter Kernel Gateway. For scale, you may host your models on full-blown web servers such as IIS, Apache, or nginx. Each of these servers can be configured to execute Python scripts that are called via an exposed endpoint. Service APIs written in languages other than Python can also be hosted on these servers. 
Models can also be containerized and hosted by a container engine such as Docker or a container orchestration system such as Kubernetes. Runtime Environment When using Python you must ensure the appropriate Python environment is available on the host machine. The standard way to do this with Python is via a virtual environment. If your model calls out to other applications, for example Aspen HYSYS, for some of its calculations, you must ensure those applications are correctly deployed and reachable by your model. Keywords: Custom model Custom agent Custom model requirements Custom model API Custom model web interface References: None
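The string-encoding contract described above can be sketched as follows. This is an illustrative decoder/encoder that assumes a locale whose decimal separator is ',' (as in many European locales); the helper names are hypothetical, and the real separator is whatever Locale you select in the agent wizard:

```python
import json

def decode_request(payload, decimal_sep=","):
    """Unpack an Aspen Mtell-style request payload (sketch).

    Numeric values arrive as locale-encoded strings; convert them back
    to floats by normalizing the decimal separator. Parameters are left
    as the strings entered in the wizard.
    """
    req = json.loads(payload)
    to_float = lambda s: float(str(s).replace(decimal_sep, "."))
    inputs = {v["name"]: [to_float(x) for x in v["values"]]
              for v in req["inputVariables"]}
    params = {p["name"]: p["value"] for p in req["parameters"]}
    return inputs, params

def encode_response(outputs, decimal_sep=","):
    """Build the response payload, encoding numbers back into
    locale-formatted strings (sketch)."""
    enc = lambda x: str(x).replace(".", decimal_sep)
    return json.dumps({"outputVariables": [
        {"name": name, "values": [enc(x) for x in vals]}
        for name, vals in outputs.items()
    ]})
```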
Problem Statement: Custom agents were released in Aspen Mtell V14.0.1 and allow users to create agents based on custom models they develop. Custom agents require that the model must expose a well-defined web end point that can be called as an API service. This article provides an example script and gives instructions on how to spin up an API service using the Bottle web framework for one of these custom agents. The example script is not meant to be monitored as a live agent in Aspen Mtell and is solely provided as an example to guide you in formatting your own custom models. Documentation on the Bottle framework can be found at the following link: https://bottlepy.org/docs/dev/
Solution:

1. Install Python on the machine where you want to host the API. It does not have to be the Mtell server, but the Mtell server must be able to reach the API host machine. Python can be downloaded here: https://www.python.org/downloads/
   - Select the checkbox to Add python.exe to PATH.
   - If "disable path length limit" appears at the end of installation, select that option.
   - Restart the computer so the PATH change can take effect.
2. Open a command prompt as an administrator.
   - Install Bottle with the command: pip install bottle
   - Install wfastcgi with the commands: pip install wfastcgi and then wfastcgi-enable. The wfastcgi-enable command prints a directory; copy it and save it for later.
   - Install any other modules used by your project.
3. Next, set up an IIS website.
   - Create a folder in File Explorer to hold the files for your website. C:\inetpub\wwwroot is the IIS root directory.
   - Paste the attached web.config as well as the .py file with your model into this folder. bottleapp.py is attached as an example.
   - Replace the scriptProcessor in web.config with the directory that wfastcgi-enable printed earlier.
   - Replace bottleapp.wsgi_app() with "name of your .py file".wsgi_app(). For example, bottleapp.py becomes bottleapp.wsgi_app().
   - Replace C:\inetpub\wwwroot\BottleApp with the path to the folder that holds your website's files.
4. Run IIS Manager as an administrator.
   - Go to Connections and expand the tree.
   - Select "Sites".
   - Select "Add Website" under the Actions panel on the right of the window. A new window will pop up titled "Add Website". Fill in the necessary info: site name, directory containing the website content, IP address, and port.
5. Return to the node with the name of your server. You might need to restart the IIS server; this is in the Actions section on the right.
6. With the server node selected, go to FastCGI Settings and Handler Mappings and check that the expected entries show up (the FastCGI entry for your Python installation, and the handler mapping for the website you created in IIS).
7. Grant the website's application pool access to the Python installation folder:
   - Go to the folder where Python is installed. If you are not sure where it is, open a command prompt and run: where python
   - Right-click the folder and go to Properties. Select the Security tab and click Edit under "Group or user names". Click Add.
   - If your computer is attached to a domain, click "Locations…", change to the name of the computer at the top of the list, and select OK.
   - Then type in the box: iis apppool\[The name of the website] and click OK.
8. To access your API, you can use the following URL: http://hostname:port/bottleapp. Replace hostname and port with those of the IIS website you created. bottleapp can be replaced with the method you're calling if you've written your own file.

Your custom model can now be called as an API. Set up and test a custom agent from within System Manager; see this KB for details.

See below for a summary of the provided example model.

Input variables: x1, x2
Parameters: factor
Output variables:
- y1 – this variable is equal to x1 + factor*x2
- y2 – this variable is equal to x2 + factor*x1

JSON request format:
{
  "inputVariables": [
    {"name": "x1", "values": [5.8]},
    {"name": "x2", "values": [70]}
  ],
  "parameters": [
    {"name": "factor", "value": 1}
  ]
}

Keywords: Custom model Custom model API Custom agent Python agent Bottle References: None
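To exercise a deployed endpoint by hand (outside the custom agent Test screen), you can build and POST the example payload with the standard library. A hedged sketch: build_request and call_model are hypothetical helper names, and the URL is whatever you bound in IIS above.

```python
import json
import urllib.request

def build_request(inputs, parameters):
    """Assemble the JSON request payload the custom-agent interface
    expects (single-value inputs only, per the Custom Model API KB)."""
    payload = {
        "inputVariables": [{"name": n, "values": [v]} for n, v in inputs.items()],
        "parameters": [{"name": n, "value": v} for n, v in parameters.items()],
    }
    return json.dumps(payload).encode("utf-8")

def call_model(url, inputs, parameters):
    """POST the payload to the deployed endpoint and return the parsed
    reply, e.g. call_model("http://hostname:port/bottleapp", ...)."""
    req = urllib.request.Request(
        url,
        data=build_request(inputs, parameters),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```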
Problem Statement: After executing the Write Key Routine in Fidelis, which launches VSTA, you may come across the following pop-up: (screenshot) This window may consistently appear with every launch of VSTA through Fidelis, even if "do not ask again" is selected or the targeted pack is downloaded, which can be troublesome.
Solution: This window indicates a newer version of the .NET Framework needs to be installed.

1. Open Control Panel, right-click Microsoft Visual Studio Installer, and select Change.
2. Click Modify for the VSTA version matching the installed Fidelis version (e.g., VSTA 2019 for Fidelis V14).
3. Select .NET desktop development > .NET Framework 4.8 (or the version referenced in the pop-up) development tools > Modify.
4. The installer will then install the new components, and the pop-up should no longer appear after launching VSTA in Fidelis.

Keywords: change net framework Fidelis References: None
Problem Statement: The user gets an “Http Error 503. The service is unavailable” error message when trying to navigate to Aspen Mtell View.
Solution: This error message in Mtell View is usually caused by the Mtell View application pool being stopped. To troubleshoot, try each of the sections below:
- Restart the Application Pool
- Check the Application Pool Private Memory Limit

Restart the Application Pool

You can confirm whether the application pool is stopped, and restart it, using the following steps:
1. Click your Windows button and type iis to open IIS Manager.
2. Expand your server name and then click Application Pools.
3. Under Application Pools, find Aspen Mtell View and check the status.
4. If the status is Stopped, right-click the application pool and click Start.
5. After restarting the application pool, try navigating to Mtell View again.

Check the Application Pool Private Memory Limit

If Mtell View begins to work after restarting the application pool but repeatedly gives the same error after some time, it may be configured with a Private Memory Limit. You can check Event Viewer to see if a Private Memory Limit is causing the application pool to stop:
1. Click your Windows search button and type event viewer to open the Event Viewer.
2. Expand Windows Logs and click System.
3. You should see several Information-level messages from the WAS source with the message "A worker process serving application pool 'AspenMtellView' has requested a recycle because it reached its private bytes memory limit."
4. You should also see an Error-level message from the WAS source with the message "Application pool 'AspenMtellView' is being automatically disabled due to a series of failures in the process(es) serving that application pool."

Follow these steps to check if a Private Memory Limit is configured and remove it: 
1. Click your Windows button and type iis to open IIS Manager.
2. Expand your server name and then click Application Pools.
3. Under Application Pools, find Aspen Mtell View, right-click it, and select Advanced Settings.
4. Scroll down to the bottom and look for the field Private Memory Limit. If the field is not set to 0, a memory limit is configured.
5. Remove the memory limit by setting Private Memory Limit to 0.
6. Click OK and restart the application pool.

Keywords: Mtell View Error HTTP 503 Service Unavailable Application Pool Stopping References: None
Problem Statement: Users may want to make use of the Aspen Process Data Excel add-in but prefer to avoid installing the thick-client application Aspen Process Explorer. KB000069660 describes adding http://webservername/Web21/DownloadAddin.asp to a graphic so that the aspenONE Process Explorer (A1PE) user can access the URL to install the Aspen Process Data Excel add-in. This article gives a step-by-step procedure on how to install the Excel add-in via the aspenONE Process Explorer server.
Solution: Before you install the Aspen Process Data Excel add-in, please make sure that you have installed all the prerequisites. You can go to https://www.aspentech.com/en/platform-support and check the prerequisites.

Prerequisites for V12: Excel (2013 32-bit, 2016 32-bit/64-bit, 2019 32-bit/64-bit), Microsoft .NET Framework 4.8

1- Log on to A1PE from the client machine and navigate to the graphic which includes the link: http://webservername/Web21/DownloadAddin.asp
2- Click on "Download and install ExcelAddinSetup.exe".
3- Run ExcelAddinSetup-<A1PE server hostname>.exe. Note: If there is any issue downloading the file, copy the file from C:\inetpub\wwwroot\AspenTech\AspenCUI on the A1PE server to the client machine, then run ExcelAddinSetup-<A1PE server hostname>.exe as administrator to install it.
4- Configure the data source:
   a. Launch Excel, go to "Aspen Process Data", and click on "Tag Browser".
   b. Click on View | Options.
   c. Click on the "MES Web Server" tab, check "Use MES Web Server", type the web server name, and click OK. You may get a Windows Security popup window; type the domain username and password and click OK.
   d. Double-click on the data source and ensure it is connected.
   e. Search for a tag name.
5- Get the current value:
   a. Click on "Current Value" | New. If the data source list is empty, go to C:\Program Files\Common Files\AspenTech Shared\ExcelAddIn for 64-bit Excel or C:\Program Files (x86)\Common Files\AspenTech Shared\ExcelAddIn for 32-bit Excel, open ExcelAddin.xml, and change webhost to the IP address of the A1PE server.
   b. Close Excel and reopen it, then click on "Current Value" | New again; you should be able to see the data source this time.
   c. Drag and drop a tag from the tag browser and click on Apply; you should be able to get the current value of the tag.

Note: If you keep getting the Windows Security popup, do the following on the client machine:
   a. Go to "Internet Options".
   b. Go to Security | "Trusted sites" | click on Sites | add both the IP address and hostname of the A1PE server.
   c. Click on "Custom Level…".
   d. Select "Automatic logon with current user name and password", click OK, click Yes, then click OK.

Keywords: Excel add-in, Web21, DownloadAddin.asp, ExcelAddinSetup.exe References: None
Problem Statement: The Aspen Update Agent fails to install Aspen Cim-IO V12.2 after successfully installing Aspen Cim-IO V12 from the V12.2 installation media. The other updates for Aspen Cim-IO Interfaces, Process Data and Common Components install successfully. The installation log file terminates with an access denied error message:

hh-mm-ss Starting aspenONE Update Agent.
hh-mm-ss Getting local patches...
hh-mm-ss Working directory: path\Patches
hh-mm-ss New Patch Found: aspenONE CIM-IO V12.2
hh-mm-ss Done Getting local patches...
hh-mm-ss **Processing patch 1 of 1
hh-mm-ss Checking disk space
hh-mm-ss Disk space available: nnnnnnnn
hh-mm-ss Disk space needed for patch: 21591072
hh-mm-ss Extracting patch... aspenONE CIM-IO V12.2
hh-mm-ss Done extracting patch...
hh-mm-ss Installing patch...
hh-mm-ss Done Installing patch...
hh-mm-ss *************************
hh-mm-ss Patch install failed:
hh-mm-ss Please refer to the log file in the %temp% directory for more information and look for the log file starting with the name:
hh-mm-ss *************************
hh-mm-ss Total Number of patches installed successfully: 0
hh-mm-ss Error: Access to the path 'setfilestamp.exe' is denied.

The Application Event log records a Warning from MsiInstaller: Event ID: 1015, Failed to connect to server. Error: 0x80004002
Solution:

1. Copy the AspenCim-IO_V12_CP2.exe file, located in the Patches folder of your download media (\aspenONEMedia\Patches\), into a temporary folder (assumed here to be C:\temp\).
2. Open a command prompt window (Run as Administrator) and extract the AspenCim-IO_V12_CP2.exe file into the temporary folder using the following commands:
   > CD c:\temp
   > AspenCim-IO_V12_CP2.exe /e /f c:\temp
   Note: /e means extract only and suppresses the execution of the embedded setup procedure; /f specifies the target folder and must be provided (you can use the same folder). Don't close the command prompt window; we will use it again later.
3. Download Cimio_installpatch.txt from this knowledge base article's attachments area.
4. Rename the downloaded file so it has a .vbs file extension; it should now have the filename Cimio_installpatch.vbs.
5. Use File Explorer to move Cimio_installpatch.vbs into the same folder as the previously extracted files, i.e. move the file into C:\temp.
6. Open the C:\temp\Cimio_installpatch.vbs file in a text editor and ensure that the msp filename is correct (it is likely to be correct already: AspenCim-IO_V12_CP2.msp). Save any changes.
7. In the command prompt window (which has remained open and has administrator privileges), run the Cimio_installpatch.vbs file:
   > Cimio_installpatch.vbs
8. You should see a dialog appear showing that Aspen CIM-IO is being installed and configured. No errors are expected. You will then be told that you must restart your system for the configuration changes made to Aspen CIM-IO to take effect. Please do so at a convenient moment.

If errors persist during this process, please contact AspenTech technical support.

Additional note: This workaround has proven successful with a similar issue seen when attempting to use the Aspen Update Agent to install Aspen V11.0 Cumulative Patch 2, and it could well work for other patches. Simply follow the same routine as described above, taking great care to update the vbs file with the correct msp PatchFileName (no other changes are required in the vbs file).

Keywords: failure References: None
Problem Statement: To access the GDOT Web Viewer from a business network, port configuration is required to allow the flow of information through the security firewalls. A clear guide is needed to describe the ports that need to be open and the general architecture of servers on a DMZ (demilitarized zone) to allow monitoring on different network levels.
Solution: To get access to the GDOT Web Viewer from a business network, a common practice is to install the GDOT Online server and the GDOT Web Viewer server on different machines and place the Web Viewer server on a DMZ network level - basically a buffer zone between the exterior business network and the internal process control network, with firewalls in between. Here is a simplified diagram of how this architecture is arranged (highlighted in red are the ports that we will be discussing): These are the two main ports that need to be open through the DMZ firewalls: Port 8000/8001/8091: Only one port is required for communication between the GDOT Web Viewer server and the GDOT Online server (Optimiser, Data Reconciliation applications). By default it is set as either port 8000, 8001, or 8091; the exact port can be reviewed and even modified through configuration files (please review the GDOT Installation Guide and User Guide for detailed instructions). Through this port the GDOT Web license is also validated, so no extra port needs to be opened between the GDOT Web Viewer server and the GDOT Online server. The status of this connection can be viewed in the “Aspen SLM License Information” section on the top-right corner. Port 80 (http) / 443 (https): To access the GDOT Web Viewer from an internet browser, the communication is done through HTTP by default, or through HTTPS if an SSL certificate is configured on the IIS (Internet Information Services) instance that hosts the GDOT Web site. Note: Ports 80 and 443 are the default ports on IIS, but these bindings can be changed. If a port different from the default is used, the chosen port is the one that needs to be open through the firewall, and that same port needs to be written in the URL when accessing the GDOT Web site. As a quick example, suppose we change the HTTP port from 80 to 90.
On IIS -> Default Web Site -> Edit Bindings it would look like this: And the URL on the web browser would need to be written like this, where GDOTV14 is the name of the GDOT Web Viewer Server: This setup is a way to ensure that client machines on the business network do not directly interact with the servers inside the process control network, while still having access to monitor the GDOT Optimiser and Data Reconciliation applications. Just a reminder that the GDOT Web Viewer website is a read-only tool, so it allows the display of data coming from the GDOT applications, but cannot change or affect the GDOT applications themselves. Keywords: GDOT Web Viewer, DMZ, GDOT Online, port, firewall References: None
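Once the firewall rules are in place, reachability of the ports discussed above can be spot-checked from either side of the DMZ. The short Python sketch below simply attempts a TCP connection; the host name GDOTV14 and the ports are placeholders to be replaced with your own GDOT server names and configured ports.

```python
import socket


def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    'timeout' seconds -- a quick proxy for 'the firewall allows it'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Placeholders -- replace with your GDOT servers and configured ports:
# print(is_port_open("GDOTV14", 8091))  # Web Viewer -> GDOT Online
# print(is_port_open("GDOTV14", 443))   # browser -> Web Viewer (HTTPS)
```

A successful TCP connect only shows the path through the firewall is open; the GDOT services themselves must still be running for the Web Viewer to display data.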
Problem Statement: Some customers encounter an issue when attempting to initialize an Aspen SLM Network License. The error message states: You are attempting to install this license file on a remotely connected machine, but the license file is not valid for remote connection (it is missing the 'SLM AllowRemoteSession' license). This knowledge base article aims to identify the root cause of this error and provide a step-by-step solution to resolve it.
Solution: This issue can be resolved by following the steps outlined below: Direct User Connection: To resolve the SLM_AllowRemoteSession error, it is important that the user connects directly to their machine without using remote control. Ensure that the user is physically present at the machine where the Aspen SLM Network License is being initialized. Delete Standalone Licenses: It is necessary to delete any standalone licenses present in the following locations: C:\Program Files (x86)\Common Files\AspenTech Shared or C:\Program Files\Common Files\AspenTech Shared. Removing these licenses will help resolve the SLM_AllowRemoteSession error. Initialize License: Once the user is directly connected to their machine and any standalone licenses have been deleted, attempt to initialize the Aspen SLM Network License again. Follow the standard procedure for initializing the license. By following these steps, customers experiencing the SLM_AllowRemoteSession error should be able to resolve the issue and successfully initialize their Aspen SLM Network License. Keywords: SLM, AllowRemoteSession, Network, Standalone, initialize, License References: None
Problem Statement: Which versions of the OPC standard does the Infoplus.21 OPC DA Server support?
Solution: Infoplus.21 supports several versions, including OPC DA 1.0, 2.0 and 3.0. The following COM interfaces are implemented:
OPCServer:
IUnknown
IOPCServer
IOPCCommon
IConnectionPointContainer
IOPCItemProperties
IOPCBrowse
IOPCItemIO
OPCGroup:
IUnknown
IOPCItemMgt
IOPCGroupStateMgt
IOPCGroupStateMgt2
IOPCSyncIO
IOPCSyncIO2
IOPCAsyncIO2
IOPCAsyncIO3
IOPCItemDeadbandMgt
IOPCItemSamplingMgt
IConnectionPointContainer
Keywords: OPC DA Infoplus.21 OPC References: None
Problem Statement: This knowledge base article illustrates how to manage the timeline in AspenOne Process Explorer trend view
Solution: The Trend View feature in AspenOne Process Explorer (A1PE) allows engineers to visualize process data over time and identify trends and patterns that can be used to optimize plant performance. One important aspect of the Trend View feature is the timeline area, which determines the time period that the plot displays. This article provides a comprehensive guide on how to manage the timeline in A1PE trend view. The timeline area includes a sparkline area, left and right buttons, a time span slider, and time-based zoom links. To manage the timeline, follow these steps: 1. Adjust the span of the window by dragging one of the ends of the window, clicking one of the span buttons, or editing the time range manually through the Timespan and Timeline Settings. 2. Move the viewing window to the centre of the selected area by clicking anywhere outside of the viewing window. 3. Slide the window to the desired area by clicking and dragging inside of the viewing window. 4. Shift the viewing window left or right by 0.5 of the span length by clicking the arrows. 5. Turn Real-Time Mode on and off by clicking the clock icon. When the clock is green, Real-Time mode is ON. When the clock is red, Real-Time mode is OFF. 6. Read the X-Axis labels, which clearly indicate the start time, end time, and span of the viewed time range in the trend. By following these steps, you can effectively manage the timeline in A1PE trend view and gain valuable insights into plant operations. The timeline area is an essential component of A1PE trend view, allowing engineers to view process data over time and identify trends and patterns. By mastering the timeline management tools, you can optimize plant performance and improve overall efficiency. Keywords: A1PE, Timeline, slide bar, span, trend view References: None
Problem Statement: Custom agents were released in Aspen Mtell V14.0.1 and allow users to create agents based on custom models they develop. Custom agents require that the model must expose a well-defined web end point that can be called as an API service. This article provides an example script and gives instructions on how to spin up an API service using the Jupyter Kernel Gateway web server for one of these custom agents. The example script is not meant to be monitored as a live agent in Aspen Mtell and is solely provided as an example to guide you in formatting your own custom models. Documentation on the Jupyter Kernel Gateway can be found at the following link: https://jupyter-kernel-gateway.readthedocs.io/en/latest/index.html
Solution: Install Python on the machine where you want to host the API. It does not have to be the Mtell server, but the Mtell server should be able to access the API host machine. Python can be downloaded here: https://www.python.org/downloads/ Select the checkbox to Add python.exe to PATH If disable path length limit appears at the end of installation, select that option. Restart the computer, so the path can take effect Open Windows PowerShell as an administrator Run the following command to install Jupyter Notebook: py -m pip install notebook Install a source-code editor that is compatible with Jupyter Notebook, such as Visual Studio Code https://code.visualstudio.com/ Write your custom model as a Jupyter Notebook file An example custom model is attached If writing your own model, you will need to run the model before putting it online to generate metadata for the file. Since there is no input when the model is expecting one, you will need to close your file to end the run. You can check that the metadata is there by opening the .ipynb file in a text editor You will need an appropriate Python environment available on the host machine. The standard way to do this with Python is to create a virtual environment. In the PowerShell window, change to the path where you want to save the virtual environment. Save this environment in the same location you saved your .ipynb file. cd “path where you want to save the virtual environment” Create the virtual environment. Replace EnvironmentName with a name to give your environment. py -m venv EnvironmentName Activate the virtual environment. Replace EnvironmentName with the name you gave your environment. .\EnvironmentName\Scripts\activate Your environment can be deactivated with the command: deactivate Install any prerequisites to run your Jupyter Notebook custom model For the simple example model, you will require the following prerequisites. For models you have written, more may be required. 
Install with the following commands:
pip install jupyter-kernel-gateway==2.5.1
pip install jupyter-server==1.23.3
pip install jupyter_client==7.4.8
Start the Jupyter Kernel Gateway to put the model online as an API. Save the attached demo.bat file in the same folder as your .ipynb file. It should be the current folder in PowerShell. Edit the demo.bat file. Replace jupnb.ipynb with your file name, apm with your server IP address or name, and 8888 with the desired port for the API. These edits will define the URL you will use to connect to the API. The current demo.bat file uses http://apm:8888/jupnb. When connecting to the API, you can make the same replacements in this URL as you did in the batch file. jupnb should be replaced based on which method you are calling. Run the attached demo.bat file through PowerShell with the command .\demo.bat To take the model offline, you can use CTRL+C. Your custom model can now be called as an API. Set up and test a custom agent from within System Manager. See this KB for details. See below for a summary of the provided example model.
Input Variables: x1, x2
Parameters: factor
Output Variables:
y1 – this variable is equal to x1 + factor*x2
y2 – this variable is equal to x2 + factor*x1
JSON Request Format:
{
  "inputVariables": [
    { "name": "x1", "values": [5.8] },
    { "name": "x2", "values": [70] }
  ],
  "parameters": [
    { "name": "factor", "value": 1 }
  ]
}
Keywords: Custom model Custom model API Custom agent Jupyter notebook agent Jupyter kernel gateway References: None
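The example model's arithmetic and request format can be exercised locally before putting the notebook online. The sketch below mirrors the documented y1/y2 formulas and builds the JSON body in the format shown above; the URL is the same placeholder (http://apm:8888/jupnb) used in demo.bat, and the actual HTTP POST is left commented out since it requires the gateway to be running.

```python
import json


def evaluate_example_model(x1, x2, factor):
    """Replicates the documented example model:
    y1 = x1 + factor*x2, y2 = x2 + factor*x1."""
    return {"y1": x1 + factor * x2, "y2": x2 + factor * x1}


def build_request(x1, x2, factor):
    """Build the JSON request body in the format the custom agent expects."""
    return json.dumps({
        "inputVariables": [
            {"name": "x1", "values": [x1]},
            {"name": "x2", "values": [x2]},
        ],
        "parameters": [{"name": "factor", "value": factor}],
    })


body = build_request(5.8, 70, 1)
outputs = evaluate_example_model(5.8, 70, 1)
# With the documented sample inputs, y1 = 5.8 + 1*70 and y2 = 70 + 1*5.8.

# To call the live API once demo.bat is running (placeholder URL):
# import urllib.request
# req = urllib.request.Request("http://apm:8888/jupnb", data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```

Running the arithmetic locally like this lets you confirm the gateway's responses against known-good values.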
Problem Statement: This article aims to explain how the different components of APC tools work together in the RTE platform and how data is transferred between them. The RTE platform refers to using the product Aspen DMC3 Builder for building, configuring and deploying an APC controller.
Solution:
Figure: RTE Architecture diagram
APC Online Server
The RTE Service is responsible for starting/stopping and scheduling RTE Application processes that perform the DMC3 controller cyclical processing. The RTE Service is also responsible for starting/stopping IO Facility processes (one for each IO Source that is configured in Configure Online Server). Possible sources are Cim-IO, OPC DA, Process Data and APC Gateway (see the FAQ below for more information on APC Gateway). Each IO Source is an independent process acting as a Cim-IO client connected to a Cim-IO Server. All RTE Application processes perform the following tasks (in order) during each controller cycle:
1. Receive signal from RTE Service scheduler to run
2. Request IO Reads from each related IO Source
3. Perform Input Calculations
4. Execute control engine
5. Perform Output Calculations
6. Publish entry changes to the Node Repository
7. Request IO Writes for each affected IO Source
8. Go to sleep and wait for next activation
The RTE Service has an in-memory database called the Node Repository, which contains all the entry information regarding deployed applications. When a value changes for an entry, whether from PCWS, a DMC3 controller or the DCS, the final value resting place is an entry object in the Node Repository. Entries have built-in validation rules, which means that validation occurs at the entry level as the data changes (PCWS data entry, IO Source reads, controller updates). There is also special validation in the controller engine execution, such as the Extended Setpoint Validation.
Aspen Watch Server
WatchDataPump.exe program – this is a node repository client, which maintains a constantly updated repository containing whatever applications it is subscribed to. In Aspen Watch Maker > Add Online Host Name, when you specify an RTE Type connection, it makes a repository client connection to an RTE Node Repository. Whenever values change in the RTE Node Repository, typically at the end of the control cycle, the RTE Node Repository gathers that list of changes and sends it to the subscribed repository clients (i.e., Watch Data Pump would be one of those). For ACO platform DMCplus/DMC3 controllers, Aspen Watch “pulls” a copy of the entire context and scans for data changes from the Cim-IO DMCplus Context Service. For the RTE platform, the Node Repository “pushes” changes in the data to Aspen Watch via the subscribed Watch Data Pump connections, which is more efficient. There is no “store-and-forward” capability for capturing RTE data while IP.21 or the Aspen Watch server is down, so there will be gaps in data for those periods.
APC Web Server
The Aspen APC Web Provider Data Service is a repository client that mirrors the applications available in the Data Service connections (PCWS > Configuration tab). The ATControl and AspenAPC web site tables and values that you see are displaying data stored in the Web Provider Data Service repository. When you change a value on the web page, it changes the value in the Web Provider Data Service repository, after initial validation and based on role permission. Then it sends it to the subscribed service (i.e., the RTE Service). The RTE Service validates the change and updates its own repository, and (if applicable) sends that change immediately to a connected IO Source tag name. Values displayed in the web interface from Aspen Watch are requested through IP.21 data queries.
Frequently Asked Questions
Are there any CIMIO configurations to help performance in RTE?
The RTE Cim-IO tuning is no different than the ACO platform. The List size, Timeout, etc.
can all be set in Configure Online Server - IO tab, for the IO Source. We typically recommend a 15-30 second timeout and a Frequency of 0 or 5, depending on which the OPC server behaves better with. List size can typically be 200-600. Additional load may be occurring due to limits that are written when calcs are executed, depending on the IO Flags settings.
What are some files/tools to diagnose the RTE environment?
Below are diagnostic tools and folder locations to check for errors and diagnostic information:
ReadErrors and WriteErrors log files are located in: C:\ProgramData\AspenTech\RTE\Vxx\Clouds\Online\logs
Setting the DiagnosticPrintCounter > 0 creates print files in: C:\ProgramData\AspenTech\RTE\Vxx\Clouds
The DiagnosticPrintMode controls how print files are generated: Normal (.prt) showing transformed values and Buffer (.prb) without transformed values.
lpqp (.bin) files (for analysis by Aspen) are generated automatically when solver errors occur. These are stored in the same location as the print files.
Composite Online error and output files can be found in: C:\ProgramData\AspenTech\RTE\Vxx\Clouds\CompositeOnline
Aspen APC IO Logging Service - you must start this service manually and allow it to collect data for your RTE applications. Configure Online Server - Logging tab is used to extract the logs to a CSV file. You can then use Excel to filter the logs and find IO errors or information about how long operations are taking. See KB 000099961 and the tool's Help files on how to use the APC IO Logging tool.
DebugView - downloaded from Microsoft Sysinternals; see KB 000065723 for how to use DebugView.
How do you access entry data (like the DMCplus Context Service) from other RTE applications?
In V14 you can use the APC Gateway to handle this. In Configure Online Server > IO tab click “Add APC Gateway Source” and add a source for the local or remote application to connect with.
In the Deployment node of your application, specify the IO Source and entry path in the “IO Source” and “IO Tag” fields of the desired entry.
Is there a way to schedule an application externally (like the ACO application cycle wait mode: WTMODE)?
That is not supported. All application scheduling is done internally for the RTE platform. Keywords: RTE, architecture, apc, web, watch, online, dmc3, repository, data, transfer, setup, communication, cimio References: None
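The CSV extracted via the APC IO Logging tool (Configure Online Server, Logging tab) can also be filtered outside of Excel. The sketch below assumes hypothetical column names (Timestamp, Severity, Message); check the header row of your actual extracted file and adjust the names accordingly.

```python
import csv
import io


def filter_log_rows(csv_text, column, needle):
    """Return the rows of a CSV log whose given column contains
    'needle' (case-insensitive). The column names used below are
    assumptions -- match them to your extracted log's header."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if needle.lower() in (row.get(column) or "").lower()]


# Tiny made-up sample in the assumed layout:
sample = (
    "Timestamp,Severity,Message\n"
    "2024-01-01 00:00:01,INFO,Read list 1 completed\n"
    "2024-01-01 00:00:02,ERROR,IO read timeout on list 2\n"
)
errors = filter_log_rows(sample, "Severity", "error")
```

In practice you would read the extracted CSV with open(path) instead of the in-memory sample, then inspect the matching rows for IO errors or slow operations.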
Problem Statement: It is possible to add Aspen InfoPlus.21 as a linked server in Microsoft SQL Server. This will allow for querying of Aspen InfoPlus.21 data from Microsoft SQL Server tools.
Solution: To add Aspen InfoPlus.21 as a linked server, an ODBC data source needs to first be created using the AspenTech SQLplus ODBC driver.
1. On the Microsoft SQL Server machine, open the ODBC Data Source Administrator by going to Start | Settings | Control Panel | Administrative Tools | Data Sources (ODBC).
2. Select the System DSN tab and click the Add button.
3. In the Create New Data Source window, select the AspenTech SQLplus driver and click the Finish button.
4. Click on the Advanced button and uncheck Use Aspen Data Sources (ADSA).
5. Provide a Name and Description for the ODBC data source. Type the Aspen InfoPlus.21 server name in TCP/IP host; the TCP/IP Port is 10014 by default. Click OK to save the data source.
6. Click on Test to check that the connection is OK.
Once the data source has been created, it can be referenced in the linked server. Microsoft SQL Server 2005:
1. Open the Microsoft SQL Server Management Studio. Expand the Server Objects.
2. Right mouse click on Linked Servers and select New Linked Server.
3. Provide a name for the Linked Server and select the Other data source radio button.
4. Under the Provider name list, select Microsoft OLE DB Provider for ODBC Drivers.
5. In the Data Source field, type in the name of the ODBC data source that was created above. Type a product name in the Product name field. The other fields are not required. Click OK to save the linked server creation.
Keywords: Linked Server Microsoft OLE DB Provider for ODBC Drivers References: None
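Once the linked server exists, Aspen InfoPlus.21 data is typically queried from SQL Server with T-SQL's OPENQUERY, which passes the inner SQLplus statement through to the driver. The helper below builds such a statement (doubling embedded single quotes, as T-SQL string literals require); the linked server name IP21 and the ip_analogdef query are illustrative, not taken from the article.

```python
def build_openquery(linked_server, sqlplus_query):
    """Wrap a SQLplus query in T-SQL OPENQUERY syntax. Single quotes
    inside the pass-through query must be doubled so the whole query
    survives as one T-SQL string literal."""
    escaped = sqlplus_query.replace("'", "''")
    return f"SELECT * FROM OPENQUERY({linked_server}, '{escaped}')"


# Illustrative linked server name and SQLplus query:
tsql = build_openquery("IP21", "SELECT name, ip_value FROM ip_analogdef")
```

You would execute the resulting string from SQL Server Management Studio or any SQL Server client; the quote-doubling matters as soon as the SQLplus query itself contains string literals (e.g. WHERE name = 'ATCAI').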
Problem Statement: This article lists the files a user may be interested in making back-up copies of when migrating the Production Control Web Server (PCWS) to a new version and/or server.
Solution: There are not many files that would need to be copied over during migration for standard PCWS installations. However, if the user has made a lot of custom configuration changes to the web display, then these files would need to be identified and migrated over to avoid manual re-configuration. TIP: check the Date Modified properties of the custom configuration files mentioned below to see when they were implemented, and verify whether migration is necessary if they are still applicable. In a typical APC configuration, the PCWS also acts as the Security Server. In this case, users may be interested in copying over the AFW Security User Roles and Permissions, for which the detailed procedure can be found here: https://esupport.aspentech.com/S_Article?id=000094313 If the user has any Preconfigured Variable Groups for controller applications, the procedure to migrate those can be found here: https://esupport.aspentech.com/S_Article?id=000054671 If the user has any custom configuration saved for PCWS > Online tab > Variable Plots view, they can copy over this file to the new server: C:\ProgramData\AspenTech\APC\Web Server\Config\UserProfile.config This file also contains other web display settings; plot group settings are at the end of the file. If the user account has changed, this file can also be edited to migrate the custom configuration settings to the new username.
If there have been Column Sets added to the web page for User-Defined Entries, or changes made for Application Entry Overrides, the following files would have been modified by the user and can be copied over to the new server to retain these settings: C:\ProgramData\AspenTech\APC\Web Server\Products\DMCplus\DMCplus.User.Display.Config C:\ProgramData\AspenTech\APC\Web Server\Products\APC\APC.User.Display.Config C:\ProgramData\AspenTech\APC\Web Server\Apps\<ServerName>\ApplicationName.User.Display.Config Some other directories that the user may have copied over saved configuration files are: C:\inetpub\wwwroot\AspenTech\ACOview\Plots C:\inetpub\wwwroot\AspenTech\ACOview\Reports C:\inetpub\wwwroot\AspenTech\ACOview\RTOplots C:\inetpub\wwwroot\AspenTech\ACOview\RTOreports The above mentioned ACOview\Plots directory was the original location where users could manually copy .xml files from saved Web.21/A1PE History Plot or KPI Plot configurations for them to show up under PCWS > History tab > Plot Files. The other directory that can be used for this same purpose is: C:\inetpub\wwwroot\AspenTech\Web21\Plots The procedure to copy these custom History Plot configurations so they show up under History Plot Files can be found in this KB article: https://esupport.aspentech.com/S_Article?id=000080311 Note that these .xml Plot Files are actually referencing the Aspen Watch Server so if the Aspen Watch server name has also been changed, these .xml files need to be updated with the new Data Source Name for the new Aspen Watch server. If the user wishes to copy over the files from PCWS > History tab > Plot Lists, these lists are stored in the Aspen Watch IP.21 database (definition record AW_TagListDef) and there isn't a direct way to copy these files over alone between web servers. When the Aspen Watch database is migrated/upgraded, then these plot lists would be migrated along with it. 
The detailed procedure for migrating Aspen Watch can be found here: https://esupport.aspentech.com/S_Article?id=000075689 If Benefits Monitoring has been enabled for the web server, then the following file was modified and can be copied over to the new server: C:\inetpub\wwwroot\AspenTech\ProcessData\AtProcessDataREST.config The following file contains general customizations for the PCWS web site as a whole that may be useful (some of the new APC Viewer settings are stored in this file such as default plot groups): C:\ProgramData\AspenTech\APC\Web Server\Config\SiteProfile.config The settings under PCWS > Preferences tab such as language, history plot option, etc are saved under Browser Cookies so those may not be transferrable via copying files over. Instead, you may consider taking a screenshot of the settings in that tab for reference after the upgrade so it can be changed to match those preference settings. Note that these preferences are user-specific so each user account may need to update their individual preferences. Similarly, if the standard Column Sets were changed for display of Online Operations or Engineering views, it may be useful to take screenshots of those pages to then replicate it after the migration by going to Configuration tab > Column Sets. Flowsheet Customizations (V12.0 and later) The Flowsheet viewpoint settings for DMC3 applications in the APC Viewer are stored in different locations depending on the version of PCWS. V12.0 and V12.1: C:\inetpub\wwwroot\AspenTech\AspenAPCFlowsheet\ClientApp\build\uploads C:\inetpub\wwwroot\AspenTech\AspenAPCFlowsheet\DataFiles V14 and later: C:\ProgramData\AspenTech\APC\Web Server\Flowsheet These files will need to be migrated to the new server, and if migrating to V14, they will need to be moved to the new V14 location. Additionally, the flowsheet.db file stores all the references to APC application variables that are displayed in the flowsheet views. 
If one or more of the applications is moved to a new online server, then you will need to update the flowsheet variables to reference the new application name or online host name. There is a Flowsheet migration tool that will help in this process. For more information, see the APC Viewer web help topic Migrating APC Viewer Flowsheet Configuration to a New Web Server. Keywords: PCWS, migration, upgrade, version, web, custom, display References: None
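When gathering the files listed above, it helps to copy them in one pass while preserving timestamps, so Date Modified still tells you when customizations were made. A minimal sketch follows; the path list is trimmed to two of the files mentioned above and the backup destination is a made-up example -- extend it with whichever files apply to your installation.

```python
import os
import shutil


def backup_files(paths, dest_dir):
    """Copy each existing file into dest_dir, preserving metadata
    (shutil.copy2 keeps modification times). Missing files are
    skipped silently. Returns the list of paths actually copied."""
    os.makedirs(dest_dir, exist_ok=True)
    copied = []
    for src in paths:
        if os.path.isfile(src):
            shutil.copy2(src, dest_dir)
            copied.append(src)
    return copied


# A subset of the PCWS files discussed above (extend as needed):
pcws_files = [
    r"C:\ProgramData\AspenTech\APC\Web Server\Config\UserProfile.config",
    r"C:\ProgramData\AspenTech\APC\Web Server\Config\SiteProfile.config",
]
# Example destination -- pick any folder reachable from the new server:
# backup_files(pcws_files, r"D:\PCWS_Migration_Backup")
```

Skipping missing files keeps one path list usable across installations where only some customizations exist.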
Problem Statement: Does Aspen Plus support files and folders that use special characters from other languages?
Solution: Aspen Plus can support file or folder names with local language characters if the regional settings and system locale are set to that language in the Windows Settings. When opening a file or using a folder with a special character, Aspen Plus may give the error: "Invalid Filename: This file cannot be opened or saved because it uses characters not supported on the current system locale. Please rename the file using the character set built into this locale, or switch your locale to one consistent with the language used to name the file." Keywords: None References: VSTS 869326
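The locale dependence can be illustrated directly: a file name is only usable when every character maps into the code page of the active system locale. The sketch below checks encodability against a given Windows code page; cp1252 (Western European) and cp932 (Japanese) are example code pages, and the .apw file names are made up.

```python
def filename_fits_locale(filename, codepage):
    """Return True if every character of 'filename' can be represented
    in the given code page -- the same constraint Aspen Plus hits when
    a name uses characters outside the current system locale."""
    try:
        filename.encode(codepage)
        return True
    except UnicodeEncodeError:
        return False


# A Japanese file name encodes under a Japanese code page (cp932)
# but not under a Western European one (cp1252):
# filename_fits_locale("フロー.apw", "cp932")
# filename_fits_locale("フロー.apw", "cp1252")
```

This mirrors the error text above: either rename the file using characters from the current locale's character set, or switch the system locale to match the name.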
Problem Statement: Networking technologies are under constant attack by third parties trying to get information they are not entitled to. These vulnerabilities are identified as they are found, and tracked on the CVE tracking site https://cve.mitre.org/cve/. The instructions provided are to upgrade Tomcat to address the CVE items identified in the tracking site.
Solution/Workaround:
1. Read all instructions before executing them.
a. IMPORTANT: If the administrative access popup displays during any of the instructions, the access will have to be allowed for a successful upgrade to be performed.
b. The directions for navigating directories are written for 64-bit installations. For 32-bit installations, replace Program Files with Program Files (x86).
2. Determine the installation location for Tomcat.
3. Download the Tomcat zip file from the official Apache Tomcat website.
a. If the current Tomcat is installed in the Program Files directory structure, download apache-tomcat-7.0.56-windows-x64.zip
b. If the current Tomcat is installed in the Program Files (x86) directory structure, download apache-tomcat-7.0.56-windows-x86.zip
4. Open the Windows Services application.
a. Stop the Apache Tomcat service.
b. Write down the account used to run the Apache Tomcat service.
5. Open a Command Prompt with administrative privileges.
a. Navigate to the Tomcat bin directory (C:\Program Files\Common Files\AspenTech Shared\Tomcat<version>\bin)
b. Execute the command ‘service remove’.
6. Unzip the Tomcat zip file you downloaded in step 3 into the C:\Program Files\Common Files\AspenTech Shared directory.
7. Open a Windows File Explorer.
a. Rename the apache-tomcat-7.0.56 directory to Tomcat7.0.56
b. In the Tomcat7.0.56 directory, delete the conf, logs, temp, webapps, and work directories.
c. Copy the conf, webapps and appdata directories from C:\Program Files\Common Files\AspenTech Shared\Tomcat<version> to C:\Program Files\Common Files\AspenTech Shared\Tomcat7.0.56.
d. For 32-bit installs, delete the C:\Program Files (x86)\Common Files\AspenTech Shared\Tomcat7.0.56\conf\AspenSearch.keystore.
8. Edit Tomcat7.0.56\conf\Catalina\localhost\solr.xml
a. Replace the instance of Tomcat7.0.x with Tomcat7.0.56, where x is the previous version of Tomcat.
b. Save the file.
The file may have to be saved to the desktop and copied back into the correct directory.
c. Close the editing program.
9. Edit C:\Program Files\Common Files\AspenTech Shared\Tomcat7.0.56\conf\Catalina\localhost\AspenCoreSearch.xml
a. Replace the instance of Tomcat7.0.x with Tomcat7.0.56, where x is the previous version of Tomcat.
b. Save the file. The file may have to be saved to the desktop and copied back into the correct directory.
c. Close the editing program.
10. Edit C:\Program Files\Common Files\AspenTech Shared\Tomcat7.0.56\conf\server.xml
a. Replace the instance of Tomcat7.0.x with Tomcat7.0.56, where x is the previous version of Tomcat.
b. Save the file. The file may have to be saved to the desktop and copied back into the correct directory.
c. Close the editing program.
11. Return to the Command Prompt.
a. Navigate to the C:\Program Files\Common Files\AspenTech Shared\Tomcat7.0.56\bin directory.
b. Execute the command ‘service install’.
c. Close the Command Prompt.
12. Return to the Windows Services application.
a. Refresh the status of the services.
b. Edit the properties for the Apache Tomcat service.
c. Change the Startup Type to Automatic (Delayed Start).
d. Click on the Log On tab.
e. Enter the information for the account used to run the previous version of Tomcat (saved in step 4b).
f. Click on the OK button.
g. Start the Apache Tomcat service.
h. Close the Windows Services application.
13. Verify aspenONE is functioning correctly.
NOTE: The following KBs should be used instead for the V10.1, V11.x and V12.x releases:
Version of A1PE | Default Tomcat | Supported Tomcat (additional version supported for A1PE if upgraded using our development- and QE-tested KB) | KB for upgrading
V10.1 | Tomcat 8.0.36/Java8 | Tomcat 8.5.73/Java8 | https://esupport.aspentech.com/S_Article?id=000099554
V11 | Tomcat 8.5.23/Java8 | Tomcat 8.5.73/Java8 | https://esupport.aspentech.com/S_Article?id=000099555
V12 | Tomcat 9.0.27/Java11 | Tomcat 9.0.56/Java11 | https://esupport.aspentech.com/S_Article?id=000099556
Keywords: aspenONE, Search, SOLR, Security References: None
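Steps 8-10 above perform the same substitution (the old Tomcat7.0.x directory name becomes Tomcat7.0.56) in three configuration files, which is easy to get wrong by hand. A hedged sketch of that substitution in Python follows; the sample XML line is illustrative rather than the exact contents of solr.xml, and the regex simply matches any Tomcat7.0.<digits> directory name. Run it against copies of the files first.

```python
import re


def update_tomcat_paths(text, new_dir="Tomcat7.0.56"):
    """Replace any 'Tomcat7.0.<digits>' directory reference with the
    new directory name, as steps 8-10 describe for solr.xml,
    AspenCoreSearch.xml and server.xml."""
    return re.sub(r"Tomcat7\.0\.\d+", new_dir, text)


# Illustrative line in the style of a Tomcat context file:
sample = r'<Context docBase="C:\Program Files\Common Files\AspenTech Shared\Tomcat7.0.42\appdata" />'
updated = update_tomcat_paths(sample)

# To apply in place, read each conf file, substitute, and write it back:
# for path in [r"...\Tomcat7.0.56\conf\server.xml", ...]:
#     with open(path, encoding="utf-8") as f:
#         text = f.read()
#     with open(path, "w", encoding="utf-8") as f:
#         f.write(update_tomcat_paths(text))
```

Because the replacement itself matches the pattern, re-running the substitution is harmless (idempotent).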
Problem Statement: How do you model a spheripol high-impact polypropylene process?
Solution: This solution includes simulation files and documentation for a detailed process model of the Spheripol high-impact polypropylene process for Aspen Plus V8.8 or higher. The model includes validated physical properties based on the PC-SAFT method and a detailed reaction model with representative kinetic parameters for a typical multi-site supported Ziegler-Natta catalyst. Spheripol PP_Impact_Copolymer_PCSAFT_v11.apwz includes a 64-bit compiled .dll file and will run in Aspen Plus V11 and higher. Spheripol PP_Impact_Copolymer_PCSAFT_v88.apwz includes a 32-bit compiled .dll file and will run in Aspen Plus V8.8 to V10. The model predicts the molecular weight distribution of the product, key properties such as the melt flow ratio (MFR) and weight percent ethylene, and many other properties. The model also provides a full mass and energy balance of the entire process. Once tuned against an existing plant and specific product grade chemistry, the model can help polymer producers better understand, debottleneck, and optimize operations. Keywords: Aspen Polymers, Polypropylene, PP, HiPP, Spheripol, Ziegler-Natta, Metallocene, EPR, high-impact, ethylene-propylene rubber, copolymer References: None
Problem Statement: This knowledge base article illustrates how to install Internet Information Services (IIS) for AspenOne Process Explorer
Solution: AspenOne Process Explorer is a powerful tool for monitoring and analysing industrial processes. To ensure that it functions properly, it is important to install the required Internet Information Services (IIS) roles on your server. In this article, we will provide step-by-step instructions for installing IIS roles for AspenOne Process Explorer. To install IIS roles for AspenOne Process Explorer, please follow the steps below: Open the Server Manager or Add Windows features menu. Select the Web Server (IIS) role from the list of available roles. In the Web Server (IIS) window, select the following roles: Common HTTP Features (all items except WebDAV Publishing) Health and Diagnostics (all items). Performance (all items except Dynamic Content Compression) Security: Request Filtering Basic Authentication Digest Authentication Windows Authentication Application Development (all items) Management Tools (all items except Management Service) Click Next to continue and review the Confirmation window to ensure that the correct roles/features have been selected. Click Install to begin the installation process. Wait for the installation process to complete. By following these steps, you can install the required IIS roles for AspenOne Process Explorer. It is important to note that these steps may vary slightly depending on the version of IIS being used. Keywords: IIS, AspenOne process Explorer, Roles References: None
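The same roles can be enabled unattended via DISM rather than Server Manager. The Windows optional-feature names below (e.g. IIS-WebServerRole, IIS-BasicAuthentication) are common examples and are assumptions here; verify them against the output of 'DISM /Online /Get-Features' on your server before running anything. The sketch only assembles the command lines.

```python
def build_dism_commands(features):
    """Build one 'DISM /Online /Enable-Feature' command per Windows
    feature name. Feature names must be verified beforehand with
    'DISM /Online /Get-Features' -- the list below is illustrative."""
    return [
        ["DISM", "/Online", "/Enable-Feature", f"/FeatureName:{name}", "/All"]
        for name in features
    ]


# Example feature names -- verify against your OS before use:
iis_features = [
    "IIS-WebServerRole",
    "IIS-BasicAuthentication",
    "IIS-WindowsAuthentication",
    "IIS-ApplicationDevelopment",
]
commands = build_dism_commands(iis_features)
# Each command could then be run from an elevated prompt, e.g. with
# subprocess.run(cmd, check=True) per command.
```

The /All switch asks DISM to enable any parent features a role depends on, which matches the grouped role selection described above.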
Problem Statement: This knowledge base article illustrates the issues caused by multiple identical application names in Aspen Production Control Web Server and Aspen Watch Performance Monitor
Solution: Aspen Production Control Web Server and Aspen Watch Performance Monitor are powerful tools used for monitoring and analysing industrial process data. However, having multiple identical application names has been a common issue encountered by users of these tools. In Aspen Production Control Web Server: if multiple applications are named identically and hosted on the same online applications server, only one of these applications will appear in Aspen Production Control Web Server pages. This can lead to confusion and the incorrect assumption that the other applications are not available for monitoring or use with Aspen Production Control Web Server. In Aspen Watch Performance Monitor: Similarly, when duplicate application names are detected in Aspen Watch Performance Monitor, the new application may not be added to the database or may not have historical data collected. This can result in incomplete data analysis and adversely impact the decision-making process for optimizing industrial processes. To avoid these issues, it is important to ensure that all applications hosted on the same online applications server have unique names. This can be achieved by following the guidelines provided by AspenTech and regularly checking for duplicate application names. Multiple identical application names can cause issues with Aspen Production Control Web Server and Aspen Watch Performance Monitor. By ensuring that all applications hosted on the same online applications server have unique names, users can avoid confusion and ensure that all applications are properly monitored and analysed. This will help to optimize industrial processes and improve decision-making for process improvements. Keywords: Application, identical, PCWS, Watch performance, duplicate References: None
Problem Statement: How to check the TBP and boiling point curve for a stream in Aspen Plus?
Solution: When dealing with a petroleum application, the property input is often in the form of a distillation curve. This data is entered using the Assay/Blend option in the Properties environment. To verify stream properties and product analyses, the results also need to be examined in the form of TBP (or other boiling point) curves. Normally, the Wt% curve and TBP curve are not activated under the results tab of a material stream, so these results cannot be seen there by default. To check the boiling point curve results for any material stream, add a property set for the boiling point curve you want to check, and then review the results in the stream's results tab. To add and analyze the curve, follow these steps: 1. Add a property set for the distillation curve, such as a D86 curve on a weight basis, under the Property Sets option in the Simulation environment. 2. Under Setup, go to the Report Options tab and, on the Stream tab, select Property Sets and add the property set created in step 1. 3. Reset and run the simulation. 4. Go to the Stream Summary tab to check the boiling point curve results for the stream on a weight basis. Note: To generate distillation curves for a stream, Aspen Plus requires the stream to contain at least 5 pseudocomponents of non-zero flow, to generate distinctive data points at 10%, 30%, 50%, etc. Keywords: Petroleum application, boiling point curve, TBP analysis References: None
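As an aside, the way a reported boiling-point curve is read off at the standard cut points can be sketched with simple linear interpolation. This is illustrative only; the percent/temperature data below is invented for demonstration and is not Aspen Plus output.

```python
# Illustrative only: linear interpolation of a distillation (D86-style) curve
# at the standard wt% cut points. The curve data below is hypothetical.

def curve_temperature(percents, temps, target):
    """Linearly interpolate the boiling temperature at a given wt% distilled."""
    for (p0, t0), (p1, t1) in zip(zip(percents, temps), zip(percents[1:], temps[1:])):
        if p0 <= target <= p1:
            return t0 + (t1 - t0) * (target - p0) / (p1 - p0)
    raise ValueError("target outside curve range")

wt_pct = [0, 10, 30, 50, 70, 90, 100]       # cumulative wt% distilled
temp_c = [40, 75, 120, 160, 205, 260, 320]  # hypothetical temperatures, deg C

cut_points = {p: curve_temperature(wt_pct, temp_c, p) for p in (10, 30, 50)}
print(cut_points)  # {10: 75.0, 30: 120.0, 50: 160.0}
```

The 5-pseudocomponent requirement noted above exists precisely because interpolation like this needs enough distinct points along the curve.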
Problem Statement: How can the freezing point of water be predicted for water-NaCl (brine) solutions?
Solution: Freezing point depression occurs primarily because the ions decrease the chemical potential of the liquid; the chemical potential of the solid phase remains unchanged. Freezing of ice can also be simulated using Chemistry (see Solution 3222). Salt formation and dissociation chemistry also need to be included to accurately model freezing point depression. This can all be set up using the Electrolyte Wizard. Attached is an example of using the Property Set property TFREEZ to estimate the freezing points for water-NaCl mixtures of varying compositions. The TFREEZ Property Set property can be used to estimate the freezing point of a solution. The value of TFREEZ is the temperature where a component just begins to freeze out at a given concentration and pressure. Aspen Plus tries to calculate a value for each component. TSOL is the temperature at which the first component just begins to freeze out of the solution. PSOL is the pressure (at constant temperature) at which the first component just begins to freeze out of the solution. The freeze-out temperature is determined from fugacity. Phase composition is determined from the specified conditions. TFREEZ is calculated using the heat of fusion (HFUS) for the solid fugacity, and the liquid fugacity is calculated based on the selected property method. TSOL is the highest TFREEZ temperature for all of the components in that phase. Reliable methods for calculating mixture liquid or vapor fugacity coefficients (PHILMX and PHIVMX) and pure component solid fugacity coefficients (PHIS) are required. As always, there is no guarantee about the accuracy of the results. For NaCl solutions, the results using Aspen Plus are compared to values in the literature below.
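As a rough sanity check on the tabulated trend below (not Aspen's fugacity-based TFREEZ calculation), the ideal dilute-solution freezing-point depression dT = i * Kf * m can be computed directly. The constants used here are standard textbook values, and the simple formula is only valid at low salt concentrations.

```python
# Rough cross-check of the dilute end of the NaCl table, using the ideal
# freezing-point depression dT = i * Kf * m. Not Aspen's TFREEZ method.

KF_WATER = 1.86   # cryoscopic constant of water, K.kg/mol
MW_NACL = 58.44   # molecular weight of NaCl, g/mol
I_NACL = 2        # van 't Hoff factor for fully dissociated NaCl

def freezing_point_c(mass_frac_nacl):
    """Estimated freezing point (deg C) of a NaCl solution, ideal dilute limit."""
    molality = 1000.0 * mass_frac_nacl / (MW_NACL * (1.0 - mass_frac_nacl))
    return -I_NACL * KF_WATER * molality

# At 1 wt% NaCl this gives about -0.64 C, close to the table's value near -0.59 C.
print(round(freezing_point_c(0.01), 2))
```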
It should be observed that the freezing point stops decreasing after the solution is saturated with ions.

Mass frac NaCl | TFREEZ liquid water, C (lit.*) | Freezing point, C (Aspen)
0.00 | 0.01 | 0
0.01 | -0.580301 | -0.593
0.02 | -1.16704 | -1.186
0.03 | -1.76689 | -1.79
0.04 | -2.38615 | -2.409
0.05 | -3.02926 | -3.046
0.06 | -3.69983 | -3.703
0.07 | -4.40102 | -4.378
0.08 | -5.13564 | -5.079
0.09 | -5.90625 | -5.807
0.10 | -6.71524 | -6.564
0.11 | -7.56484 | -7.353
0.12 | -8.45714 | -8.176
0.13 | -9.39419 | -9.038
0.14 | -10.378 | -9.94
0.15 | -11.4104 | -10.888
0.16 | -12.4934 | -11.885
0.17 | -13.6289 | -12.935
0.18 | -14.8191 | -14.044
0.19 | -16.0659 | -15.216
0.20 | -17.3716 | -16.458
0.21 | -18.7385 | -17.776
0.22 | -20.1693 | -19.176
0.23 | -21.6666 | -20.667

* Data from Handbook of Chemistry and Physics, 52nd ed., p. D-213-214. Keywords: None References: None
Problem Statement: How can the freezing of water in a stream be predicted for a water and methanol mixture?
Solution: Attached is an example of using the Property Set properties TFREEZ and TSOL to estimate the freezing points for methanol-water mixtures of varying compositions. The TFREEZ Property Set property can be used to estimate the freeze-out temperature for a component in either a liquid or vapor mixture. The value of TFREEZ is the temperature where a component just begins to freeze out at a given concentration and pressure. Aspen Plus tries to calculate a value for each component. TSOL is the temperature at which the first component just begins to freeze out of the solution. PSOL is the pressure (at constant temperature) at which the first component just begins to freeze out of the solution. The freeze-out temperature is determined from fugacity. Phase composition is determined from the specified conditions. TFREEZ is calculated using the heat of fusion (HFUS) for the solid fugacity, and the liquid fugacity is calculated based on the selected property method. TSOL is the highest TFREEZ temperature for all of the components in that phase. Freeze-out temperatures can be determined for vapors such as CO2, or for liquids such as water. This example illustrates how a liquid freeze-out temperature can be determined for water. See the TREEZE2 example for an illustration of how to predict a vapor freeze-out temperature of CO2. In this example, the freeze-out temperatures of water are calculated for methanol and water solutions using an Analysis Property Table.
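The heat-of-fusion relation mentioned above can be sketched for an ideal solution. This is a simplification: Aspen Plus evaluates the liquid fugacity from the selected property method (NRTL here), whereas the sketch below assumes an activity coefficient of 1, so it only tracks the table approximately at low methanol content.

```python
# Ideal-solution sketch of the heat-of-fusion relation behind TFREEZ:
# ln(x_water) = (dHfus/R) * (1/Tm - 1/T), solved for T, with gamma = 1 assumed.

import math

R = 8.314             # gas constant, J/mol.K
DHFUS_WATER = 6010.0  # heat of fusion of ice, J/mol
TM_WATER = 273.15     # melting point of pure water, K

def ideal_freeze_out_c(x_water):
    """Ideal-solution freeze-out temperature of water (deg C) at mole fraction x_water."""
    inv_t = 1.0 / TM_WATER - R * math.log(x_water) / DHFUS_WATER
    return 1.0 / inv_t - 273.15

def x_water_from_wt_methanol(w):
    """Mole fraction of water in a methanol/water mix of methanol mass fraction w."""
    mol_meoh = w / 32.04
    mol_h2o = (1.0 - w) / 18.015
    return mol_h2o / (mol_h2o + mol_meoh)

# At 2 wt% methanol this gives roughly -1.17 C, near the table's values.
print(round(ideal_freeze_out_c(x_water_from_wt_methanol(0.02)), 2))
```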
The results calculated by Aspen Plus based on the NRTL physical property option set are compared with values reported in the literature:

Mass frac methanol | TFREEZ liquid water, C (lit.*) | Freezing point, C (Aspen)
0 | 1.00E-02 | 0
5.00E-03 | -0.2794 | -0.278
1.00E-02 | -0.5701 | -0.56
2.00E-02 | -1.1557 | -1.14
3.00E-02 | -1.7468 | -1.75
4.00E-02 | -2.3439 | -2.37
5.00E-02 | -2.947 | -3.02
6.00E-02 | -3.5566 | -3.71
7.00E-02 | -4.1728 | -4.41
8.00E-02 | -4.7959 | -5.13
9.00E-02 | -5.4263 | -5.85
0.1 | -6.0642 | -6.6
0.12 | -7.3639 | -8.14
0.14 | -8.6978 | -9.72
0.16 | -10.0689 | -11.36
0.18 | -11.4804 | -11.13
0.2 | -12.9359 | -15.02
0.24 | -15.9949 | -19.04
0.28 | -19.2821 | -23.59
0.32 | -22.8417 | -28.15
0.36 | -26.728 | -32.97
0.4 | -30.8892 | -38.6
0.44 | -35.6535 | -44.5
0.48 | -41.1148 | -51.2
0.52 | -47.3315 | -58.1
0.56 | -54.7711 | -66
0.6 | -63.9138 | -74.5
0.64 | -76.2465 | -84.4
0.68 | -95.7561 | -96.3

* Data from Handbook of Chemistry and Physics, 52nd ed., p. D-198. Keywords: None References: None
Problem Statement: How can the freezing of CO2 in a stream be predicted?
Solution: Attached is an example of using the Property Set properties TFREEZ and TSOL to estimate the freezing points for CO2 over a range of compositions. The TFREEZ Property Set property can be used to estimate the freeze-out temperature for a component in either a liquid or vapor mixture. The value of TFREEZ is the temperature where a component just begins to freeze out at a given concentration and pressure. Aspen Plus tries to calculate a value for each component. TSOL is the temperature at which the first component just begins to freeze out of the solution. PSOL is the pressure (at constant temperature) at which the first component just begins to freeze out of the solution. The freeze-out temperature is determined from fugacity. Phase composition is determined from the specified conditions. TFREEZ is calculated using the heat of fusion (HFUS) for the solid fugacity, and the liquid fugacity is calculated based on the selected property method. TSOL is the highest TFREEZ temperature for all of the components in that phase. Freeze-out temperatures can be determined for vapors such as CO2, or for liquids such as water. This example illustrates how a gas freeze-out temperature can be determined for CO2. See the TREEZE1 example for an illustration of how to predict a liquid freeze-out temperature. In this example, the freeze-out temperatures of CO2 in methanol are calculated using an Analysis Property Table. The results calculated by Aspen Plus based on the RK-SOAVE physical property method are compared with values reported in the literature. The results were improved by entering a heat of fusion for CO2 for the simulated temperature range, 1900 cal/mol. This data was obtained from Perry's Chemical Engineers' Handbook, 6th edition, page 3-120.
Mole frac CO2 | Vapor TFREEZ CO2, F (Aspen) | Vapor TFREEZ CO2, F (Aspen w/HFUS data) | Freezing point, F (lit.*)
0.005 | -151.59 | -154.27 | -170.0
0.01 | -139.70 | -142.09 | -157.0
0.02 | -126.81 | -128.85 | -143.5
0.03 | -118.75 | -120.56 | -135.0
0.04 | -112.80 | -114.43 | -129.0
0.05 | -108.05 | -109.52 | -124.0
0.06 | -104.08 | -105.41 | -120.0
0.07 | -100.66 | -101.88 | -118.5
0.08 | -97.63 | -98.76 | -113.3
0.09 | -94.96 | -95.97 | -111.0
0.1 | -92.54 | -93.46 | -101.5

* Data from Hydrocarbon Processing, August 1973, p. 107-108. Keywords: tfreez, freezing, TGS References: None
Problem Statement: Aspen Blend Controller Interface (BCI) is used to create properly formatted blend instructions and transfer those recipe instructions to a blend control system. It is an interface between a recipe data source and a blend controller system. The recipe sources include Aspen PIMS, Aspen Refinery Multi-Blend Optimizer (MBO), and Aspen Petroleum Scheduler. The question here is: how do we configure BCI?
Solution: The following procedures explain how to build a BCI model from an MBO database and submit recipes to the blend controller system. 1. Run MBO, select the event 'Gasoline Blending', and from the menu choose Events | Publish | Export Blends. This blend event will then be published to the database tables:
AB_BLN_EVENT
AB_BLN_QUANITIES
AB_BLN_RECIPES
AB_TANK_QUANITIES
AB_ADDITIVES
2. Run PimsBCI from Event | Blend Controller Interface, or from the MBO application folder by double-clicking the file PIMSBCI.exe, to create a model to transfer MBO blend information to BPC. 3. Inside the Aspen BCI window, open 'New' from File, or click the icon 'New (Ctrl+N)', to bring up a 'Model Settings' window, and fill in the highlighted fields with the proper addresses and files. After you click OK, a popup window will ask you to 'Select a BCI Model Directory'. This is the directory the BCI model will be saved to. Click 'OK'. 4. Now the BCI window will look like the following. Notice that under 'Tables', there are no models (or Excel files) attached. We need to build the tables and attach them. Create BCI Tables. Here we create a file called 'BCI_Tables.xlsx'. The tables that are required in order for a BCI model (Honeywell BPC-MBO) to work correctly for version 2006.5 and higher depend on the presence of the BLEND.CFG files. Please refer to HELP for more information. We will choose the example that does not have BLEND.CFG present. If your model does not contain BLEND.CFG files under the model directory:

BCI Table - Required
Blenders - Yes
Components - Yes
Properties - Yes
Tanks - Yes
ProdMap - Yes
MaxComp - Yes
Specs - Yes
GlobalSpecs - Yes
CustomMap - Yes
AddiMap - No
Additive - No
BlendValues - No
SpareValues - No
BMSCustomMap - No

If a BCI table is indicated as required, you must add the spreadsheet table to the model tree. This added table should be left empty if you are not using information from this table. For example, GlobalSpecs is a mandatory table for the Honeywell - MBO Recipes model.
If this table is not needed, you would add an empty table with no values but the header, as shown: Quality MinSpec MaxSpec OverrideMin OverrideMax
Create the following tables referring to the MBO model in BCI_Tables.xlsx:
ProdMap - from AB_BLN_EVENTS [PRODUCT]
CustomMaps - associated with T.Blenders
Recipe Properties - from AB_BLN_QUANITIES
Recipe Components - from AB_BLN_RECIPES
Tanks - from AB_TANK_QUALITIES
5. Go to 'Model Settings' and change the 'Misc Blend Recipe ID' to 'MACRO'. Then select RUN | Refresh Tables. 6. Make sure the MBO recipe is all green, then right-click the recipe to select it. 7. From the menu, select RUN | Submit Recipe. PIMSBCI will generate a file 'RECIPE.BPC' under the model directory. Keywords: BCI Blend Controller Interface Recipe submit procedures RECIPE.BPC BLEND.CFG Honeywell BPC References: None
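The "empty table with only a header row" idea described above can be sketched outside Excel. This is illustrative only: the real model uses a worksheet in BCI_Tables.xlsx, not a CSV, but the header columns are the ones listed for GlobalSpecs.

```python
# Illustrative sketch: emitting the mandatory-but-empty GlobalSpecs table as
# a header-only CSV. The real BCI model expects an Excel sheet; the column
# names below are the GlobalSpecs headers from the article.

import csv
import io

GLOBALSPECS_HEADER = ["Quality", "MinSpec", "MaxSpec", "OverrideMin", "OverrideMax"]

buffer = io.StringIO()
csv.writer(buffer).writerow(GLOBALSPECS_HEADER)  # header row only, no data rows

print(buffer.getvalue().strip())  # Quality,MinSpec,MaxSpec,OverrideMin,OverrideMax
```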
Problem Statement: How can user customized material databases be distributed to other PCs also running the Aspen Teams program?
Solution: Copy the user database files listed below to the PCs needing access to the database files. The files are located (by default) in the B-JAC 12.0 \ DAT \ PDA folders. Note that you can relocate these to any specified folder (specify the location in the Tools / Program Settings / Files section of the B-JAC program). It is possible to locate these customized database files at a central location on a network server to support multiple PCs on a network.
N_MTLDEF.PDA Default materials for generic materials (ASME)
N_MTLDIN.PDA Default materials for generic materials (DIN)
N_MTLCDP.PDA Default materials for generic materials (AFNOR)
N_PARTNO.PDA Part number assignment for bill of materials
N_PRIVI.PDA Private properties materials databank index
N_PRIVP.PDA Private properties materials databank properties
N_STDLAB.PDA Fabrication standards, procedures, costs, etc.
N_STDMTL.PDA Fabrication standards as function of materials
N_STDOPR.PDA Fabrication operation efficiencies
N_STDWLD.PDA Fabrication welding standards
N_STDPRC.PDA Private materials prices
You should open the Materials Database through Tools / Data Maintenance / Materials Database, go to the user materials, select any user material, temporarily make a change to any item, and then change it back. Then save the changes. This will reset your material index files so they show all the user-created materials in the Teams search engine. Keywords: User customized materials database files References: None
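The manual copy of the user .PDA files listed above can also be scripted. The sketch below is illustrative: the source and target paths are placeholders, and only files that actually exist in the source folder are copied.

```python
# Illustrative sketch: copy the user-customized B-JAC .PDA database files
# from one folder to another. Paths are examples; point them at your actual
# DAT\PDA folders (local or on a network share).

import shutil
from pathlib import Path

USER_DB_FILES = [
    "N_MTLDEF.PDA", "N_MTLDIN.PDA", "N_MTLCDP.PDA", "N_PARTNO.PDA",
    "N_PRIVI.PDA", "N_PRIVP.PDA", "N_STDLAB.PDA", "N_STDMTL.PDA",
    "N_STDOPR.PDA", "N_STDWLD.PDA", "N_STDPRC.PDA",
]

def copy_user_databases(source_dir, target_dir):
    """Copy each user .PDA file present in source_dir into target_dir.

    Returns the list of file names actually copied."""
    source, target = Path(source_dir), Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in USER_DB_FILES:
        if (source / name).is_file():
            shutil.copy2(source / name, target / name)  # preserves timestamps
            copied.append(name)
    return copied
```

After copying, remember the index-reset step described above so the Teams search engine picks up the user materials.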
Problem Statement: Clients are not able to access the Batch areas from Aspen Production Record Manager (APRM) client applications. In the Batch Query Tool, the error Unable to retrieve any areas from datasource displays when trying to run a query. There are several possible reasons for this error, which are discussed in the Solution section below.
Solution: The first possibility is that APRM Security is preventing the user from accessing the batch area(s). APRM Security is set in the APRM Administrator. To check if security is enabled for the area, right-click on the area and select the 'Security' tab. Users must have Read access granted to be able to select the area from one of the client tools. Another cause of the error is when the client machine has a User Data Source set up in ADSA that has the same name as the data source on the public ADSA Server. To check if a User Data Source is defined, open the ADSA Client Config Tool on the machine giving the error and click the User Data Source button. If a User Data Source is defined, it needs to be removed or edited so that the Aspen Production Manager Service is not listed. A third cause of this error is due to DCOM settings on the APRM Server which prevent the user on the client machine from accessing the list of APRM areas. This most commonly occurs with Windows Servers because of the DCOM default values on Windows Servers. To check that users have the necessary DCOM permissions to access the APRM areas:
Start Control Panel.
Select Administrative Tools.
Select Component Services.
Expand Component Services.
Expand Computers.
Right-click My Computer and select Properties.
Select the Default COM Security tab.
Click the Edit Limits button for Access Permissions. Client accounts must be granted Remote Access through a user account or domain group, such as the EVERYONE group, the ALL APPLICATION PACKAGES security object, and the Distributed COM Users group (which must contain the Authenticated Users group).
Click the Edit Limits button for Launch and Activation Permissions.
Client accounts must be granted remote launch and activation permissions through a user account or domain group, such as the EVERYONE group, the ALL APPLICATION PACKAGES security object, and the Distributed COM Users group (which must contain the Authenticated Users group). A fourth cause of this error is firewall restrictions on either the APRM Server or the client machine. The client connects to the APRM Server via DCOM, so the default port 135 and a range of ports must be accessible. A fifth cause of this error could be access to the executable. On some APRM servers, ordinary users do not have permission to access the APRM server directory C:\Program Files\AspenTech\Batch.21\Server\. Specifically, the users need access to the Batch21Services.exe executable on the APRM Server. Ensure the user account has Read & Execute, List Folder Contents, and Read permissions for the folder C:\Program Files\AspenTech\Batch.21\Server\, or simply grant permission to Everyone. Finally, in at least one case, it has been reported that removing and recreating the data source definition in ADSA has eliminated the problem. Update - May 2023 (this relates to the fourth cause above): One customer reported that the DCOM port range of 3000 to 4000 (for Aspen Calc) was not being let through the firewall. The Aspen Batch Query Tool (BQT) was trying to use this port range instead of the range 7205-7230, which WAS being allowed to pass through the firewall. After removing the 3000 to 4000 range and rebooting the server, the machines started communicating successfully. This Solution has some additional details: How do I make sure Aspen Production Record Manager client tools can be used across a firewall? https://esupport.aspentech.com/S_Article?id=000068900 Keywords: dcom Batch.21 access denied APRM References: None
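TCP reachability of the DCOM ports mentioned above (135 and the configured range, for example 7205-7230) can be checked from the client with a short script. This only verifies that a TCP connection can be opened through the firewall; it does not validate DCOM permissions themselves.

```python
# Illustrative sketch: test whether a TCP port on the APRM server is
# reachable from the client. Useful for the firewall cause described above.

import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (replace "aprm-server" with your actual APRM server name):
# print(port_open("aprm-server", 135))
# print(all(port_open("aprm-server", p) for p in range(7205, 7231)))
```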
Problem Statement: In Aspen Air Cooler rating/checking mode, why does the calculated pressure drop change when the allowable pressure drop value is modified, even though the geometry is fixed?
Solution: In Aspen Air Cooler rating/checking mode, the calculated pressure drop changes when the allowable pressure drop value is modified, even though the geometry is fixed. For example, in the snapshot below the default allowable pressure drop is 0.12 bar and the calculated pressure drop is 0.0624 bar, whereas if the allowable pressure drop is changed to 0.05 bar, the calculated pressure drop falls to 0.052 bar. The reason for this behaviour is that rating/checking mode expects the user to provide all the geometry details for the existing exchanger; if the user unknowingly omits the nozzle sizes, the pressure drop will differ, because the software predicts suitable nozzles based on the allowable pressure drop. If the user provides the correct nozzle sizes, then EDR rating/checking mode does not need to assume any nozzle sizes from the acceptable pressure drop, and the results will be accurate. Keywords: Air Cooler, pressure drop, nozzle References: None
Problem Statement: According to Aspen InfoPlus.21 Administrator, the database is not running. Aspen Tag Browser will also indicate that InfoPlus.21 is not running. This is odd because you are still able to trend process values in aspenONE Process Explorer. On further investigation, you note that Aspen InfoPlus.21 Manager is showing that the checkbox for the TSK_DBCLOCK task (at the top of the Defined Tasks checklist) is not ticked and consequently is not listed as a running task: Aspen InfoPlus.21 database cannot function correctly without this vital task and so you attempt to resolve the matter. If you attempt to run TSK_DBCLOCK (by selecting it in IP.21 Manager and clicking on RUN TASK button), you get an error dialog stating that TSK_DBCLOCK is not running anymore. If you then open the TSK_DBCLOCK.ERR file you see the following messages: CreateDBSharedMemoryObjects: CreateFileMapping failed. Error:Cannot create a file when that file already exists. CRDBSETC: Error in CreateDBSharedMemoryObjects. Error:Cannot create a file when that file already exists. So you think to try to restart the database by first clicking the STOP InfoPlus.21 button; this will not succeed without further manual intervention. The status bar will apparently hang at the message Waiting for TSK_H21T to save config.dat.... You can harmlessly kill the process associated with TSK_H21T (open Windows Task Manager, select the h21task.exe process and click End task button), but then the status bar in IP.21 Manager will indicate a similar problem with TSK_SAVE. You can use Task Manager again and end the savedb.exe task. InfoPlus.21 Manager will display an error dialog stating Error in InfoPlus21ShutdownRequest. The system cannot find the file specified., but you can dismiss this window and allow the database to complete the shutdown. If any tasks remain running then you can stop them manually using the STOP TASK button in IP.21 Manager. 
At this point you could start the database and it most likely will start okay, but what caused the problem and how can the operation of the TSK_DBCLOCK task be made more robust?
Solution: Perhaps this is not the first time you have seen this behavior; it could be described as intermittent and may happen when the database is starting up, as well as at any time during the following days or weeks of normal operation. You must check the Windows Event Viewer. If you find any crash events associated with dbclock.exe (this is the executable associated with TSK_DBCLOCK), then this Solution is likely to be helpful. Use File Explorer to check the file version properties of the STRGXI2 module: STRGXI2.dll in folder: %CommonProgramFiles%\AspenTech Shared\ If the file version precedes 2022.14.0.715, then it is likely that the headline issue is due to a problem associated with earlier SLM client software. The headline issue has not been observed on a pre-V14.0 Aspen InfoPlus.21 server that has the V14 SLM client software. There is no related source code change in the V14 build compared to the V12 release, except that the V14 build is based on the newer MSVC 2022 C++ runtime. In short, there is a timing issue related to the use of an older MSVC C++ runtime library. You can upgrade the SLM client. For information on this subject, please read the knowledge base article: How do I upgrade the SLM Client on application machines? https://esupport.aspentech.com/apex/s_downloadcenter - V14 aspenONE Software License Manager includes SLM Tools. If you don't see V14 aspenONE Software License Manager, you can request it: https://esupport.aspentech.com/apex/s_mediaupgrade Keywords: Application Error Faulting application name: dbclock.exe Exception code: 0xc0000005 Faulting application path: C:\Program Files\AspenTech\InfoPlus.21\db21\code\dbclock.exe References: None
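Deciding whether the installed STRGXI2.dll precedes the 2022.14.0.715 threshold is a plain dotted-version comparison. A small helper sketch (the version strings below are examples, not real inventory data):

```python
# Illustrative helper: compare a file version string (as shown in the DLL's
# file properties) against the 2022.14.0.715 threshold mentioned above.

def needs_slm_upgrade(file_version, threshold="2022.14.0.715"):
    """Return True if file_version precedes the threshold version."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(file_version) < as_tuple(threshold)

print(needs_slm_upgrade("2021.5.0.100"))   # True  -> upgrade recommended
print(needs_slm_upgrade("2022.14.0.715"))  # False -> at or past the threshold
```

Comparing numeric tuples (rather than raw strings) matters here, since a string comparison would order "2022.2" after "2022.14".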
Problem Statement: Aspen Unified has a strict list of prerequisites that need to be installed before attempting to install the product. One of them is the SQL databases. Here's how to configure the master, input, and results databases for AU:
Solution: First of all, you need to have the SQL Server database engine and SSMS installed on your machine, and you need to log into your database. Next, go to a command prompt and create the necessary databases using the following commands:
cd C:\Program Files\AspenTech\Aspen Unified\Admin\binX64 [Enter]
PscAdmin.exe create-master-db --databaseServer <ServerName> --databaseName <NewMasterDBName>
Example, using server name testserver and database name AUMaster: PscAdmin.exe create-master-db --databaseServer testserver --databaseName AUMaster
After entering the command, press Enter. After the database is created, you will see a message in the Command Window that says "Database registered".
PscAdmin.exe create-input-db --databaseServer <ServerName> --databaseName <NewInputDBName>
Example: PscAdmin.exe create-input-db --databaseServer testserver --databaseName AUInput
After entering the command, press Enter. After the database is created, you will see a message that says "Database registered".
PscAdmin.exe create-results-db --databaseServer <ServerName> --databaseName <NewResultsDBName>
Example: PscAdmin.exe create-results-db --databaseServer testserver --databaseName AUResults
After entering the command, press Enter. After the database is created, you will see a message that says "Database registered".
Optionally, create a site catalog database using the following command. A site catalog allows assets to be shared between planning and scheduling models.
PscAdmin.exe create-catalog-db --databaseServer <ServerName> --databaseName <NewSiteCatalogDBName>
Example: PscAdmin.exe create-catalog-db --databaseServer testserver --databaseName AUSiteCatalog
After entering the command, press Enter. After the database is created, you will see a message that says "Database registered".
Open SSMS and verify that all of the databases were created successfully. Next, set up the SQL logins.
You will need to create two SQL login accounts and then associate these accounts with the databases you have created. 1. Using Object Explorer in SQL Server Management Studio, click Security | Logins. 2. Right-click and click New Login. You will be creating 2 login accounts:
NT AUTHORITY\NETWORK SERVICE
NT AUTHORITY\AUTHENTICATED USERS
3. Enter one of the account names from step 2 as the Login Name. 4. Click Server Roles and add sysadmin (keep public) as a server role. 5. Click OK to add the new login. 6. Repeat steps 2-5 for each of the 2 login accounts you need to create. 7. This completes the steps to create your required SQL databases. Exit SQL and proceed to migrating your models. Keywords: None References: None
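The PscAdmin.exe command lines used above all share one shape. As an illustrative convenience (not part of the product), a tiny helper can assemble them so the server and database names are only typed once; run the resulting commands from the Admin\binX64 folder as described.

```python
# Illustrative helper that assembles the PscAdmin.exe command lines shown
# above for the master/input/results databases and the optional site catalog.

def pscadmin_command(kind, server, db_name):
    """Build a PscAdmin.exe command for kind in {'master','input','results','catalog'}."""
    if kind not in ("master", "input", "results", "catalog"):
        raise ValueError("unknown database kind: " + kind)
    return (f"PscAdmin.exe create-{kind}-db "
            f"--databaseServer {server} --databaseName {db_name}")

print(pscadmin_command("master", "testserver", "AUMaster"))
# PscAdmin.exe create-master-db --databaseServer testserver --databaseName AUMaster
```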
Problem Statement: There might be scenarios where the queue files are overflowing and you have to delete those files for Aspen Audit and Compliance to work as expected, as per the instructions in the Solution 'How to remove the overflow, primary & secondary queue files'. Is there a way to recover the Audit information from primary, secondary, and overflow queue files that are deleted?
Solution: Here are the steps to recover the data from the queue files. 1. Locate AuditAndComplianceRepairUtility.exe. Since V12.0, this file has been installed into: C:\Program Files\AspenTech\AuditAndComplianceManager\Server. If you are using an older version, you can download the attached AuditAndComplianceRepairUtility.exe. Make sure this is not blocked (right-click on the exe and verify that you don't see a button to Unblock). 2. Open the utility and browse to the location of the queue files from which you want to recover the information. Note: If the size of the queue files is greater than 10 MB, then adjust the Max File Size in MB field accordingly. 3. Click the Convert to Text button to generate the primary, secondary and overflow files. 4. Stop queue processing using the Aspen Audit & Compliance Administrator (you will be prompted to do so when processing files should you skip this step - the Server needs to be stopped). 5. Click the Process Files button to import the text files into the database. You will be provided updates showing it processing nn of mm files. 6. Log files will be generated in the folder detailing: Input File, Output File and information on the events found in each text file, including metrics for the number that were Processed, Inserted, Duplicated and Invalid. 7. Once completed, start queue processing using the Aspen Audit & Compliance Administrator. Keywords: AACM queue primary secondary overflow repair utility References: None
Problem Statement: This Solution assumes you are hosting the Aspen Production Execution Manager database on Microsoft SQL Server. Starting Aspen Production Execution Manager MOC, especially after an upgrade, produces error messages like the following onscreen and in the log/debug files when navigating through the product: Database error: Invalid object name 'USER_CHAR' Database error: Invalid object name 'HEADER' (other names may be seen, not just USER_CHAR and HEADER as listed above). What might cause these errors, especially if the product worked prior to the upgrade?
Solution: Open Microsoft SQL Server Management Studio and connect to the server hosting the AeBRS database. Expand it and navigate to Security => Logins => AEBRS account => right-click and select 'Properties'. On the resulting screen select 'User Mapping' from the upper left corner of the dialog. On that screen make sure that there is a mapping for AEBRS (for the database), the user is the AEBRS account, and the Default Schema is also set to the AEBRS account. If the Default Schema is set to something else (like 'dbo') it can cause the problem. But note, the value of Default Schema is ignored if the user is a member of the sysadmin fixed server role. All members of the sysadmin fixed server role have a default schema of 'dbo'. Consequently, AEBRS must not be a member of the sysadmin fixed server role. The following query run in Management Studio has proved useful for resolving related issues: USE AeBRS EXEC sp_change_users_login 'Auto_Fix', 'AeBRS' SQL Server does not necessarily have to be restarted at the conclusion of this operation, but it is recommended that Aspen Production Execution Manager MOC be restarted. Keywords: APEM References: None
Problem Statement: The procedure below for re-registering Process Recipe DCOM settings is usually a troubleshooting step when Process Recipe is having DCOM issues.
Solution: To manually register the Process Recipe and Transition Manager Server executables and services: 1. Open the Services applet and, if applicable, stop the AspenTech Calculator Engine and Aspen Transition Manager Server services. 2. Use the Windows Run dialog to start the Component Services program, dcomcnfg.exe. Then navigate to the folder Component Services | Computers | My Computer | DCOM Config. 3. Right-click the ATM_Admin.System object, and from the context menu, select Properties, as shown in the illustration below. 4. In the ATM_Admin.System Properties dialog, click the Identity tab. Then select the option for The launching user account, as illustrated below, and click OK. 5. Exit the Component Services program. 6. Use the Run as administrator privilege to open a Command Prompt window. 7. In the Command Prompt window, navigate to the Program Files (x86)\AspenTech\TransitionManager\Bin folder, where the ATM_Admin.exe file resides, by typing: cd \Program Files (x86)\AspenTech\TransitionManager\Bin 8. Type the following command (see illustration below): ATM_Admin.exe /clean /install domain\username password where domain\username and password are for a privileged account that has access to the necessary resources. For example, if InfoPlus.21 is being accessed on the same server, then you MUST use the same account that is used to run the Aspen InfoPlus.21 Task Service and the AspenTech Calculator Engine services. 9. Verify that the AspenTech Calculator Engine on the Process Recipe database server is running the 32-bit version of CalcScheduler.exe (see KB 000066959). Important Note: If any warnings or errors are reported in this process, redo the procedure above. There have also been scenarios where the command claims success even though errors or warnings occurred, so repeat the procedure until it completes with no errors and no warnings. Keywords: DCOM, troubleshooting, process, recipe, transition, sequencer, APR, APS References: None
Problem Statement: How to model Oligomerization reaction in Aspen Plus?
Solution: Oligomerization reactions involve contacting an olefin with a catalyst in order to produce a longer-chain molecule; an oligomer can consist of two or more constituent olefin molecules. This KB focuses on how to model an ethylene oligomerization reaction in Aspen Plus. Ethylene, produced in huge volumes worldwide, is the raw material for a wide range of chemical products and intermediates. Industrial reactions of ethylene include, in order of scale, polymerization, oxidation, halogenation, alkylation, hydration, oligomerization, etc. Ethylene oligomerization is of considerable academic and industrial interest because it is one of the major processes for producing linear and branched higher olefins, which are components of plastics (C4-C6 in copolymerization) and lubricants (C10-C12 through oligomerization). In an ethylene oligomerization reaction, components such as 1-butene, 1-hexene, and 1-octene are the products. Since these are small components with a fixed molecular formula and molecular weight, they can be treated as conventional components in Aspen Plus instead of components of type Oligomer. Polymerization reaction kinetics are only needed, and only feasible, when a polymer such as HDPE is the final product. To model a simple oligomerization reaction and understand the result of the process, a simple power-law reaction can be used instead of a polymer reaction to see the product formation in the reactor. The kinetic parameters can then be tuned to match the desired reaction result; using power-law reaction kinetics, the oligomer product can be formed in the reactor. If oligomers are formed as a by-product of a polymerization reaction, as in an HDPE production process, then a combination of polymerization reactions (using polymer kinetics) and power-law reaction kinetics can be used to model the combined polymer and oligomer chemistry.
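As a sketch of what the power-law approach implies (written generically: k, n, E, and the exponents are the tunable parameters mentioned above; the exact form should be checked against the power-law option selected in Aspen Plus), the rate expression has the form:

```latex
r \;=\; k\,T^{\,n}\exp\!\left(\frac{-E}{RT}\right)\prod_{i} C_{i}^{\,\alpha_{i}}
```

For example, for ethylene dimerization to 1-butene, a simple first guess might take the rate as second order in the ethylene concentration, with k and E then adjusted to match the observed product distribution.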
Keywords: Polymerization, Oligomer, Ethylene-Oligomerization References: There are many reference examples available for polymers in the Aspen Plus library; they can be found at the following path: C:\Program Files\AspenTech\Aspen Plus V12.1\GUI\Examples\Polymers\Polyethylene
Problem Statement: The VLE curve generated using Aspen Plus analysis for a nitrogen and hydrogen binary system has an abnormal shape. How should the T-xy analysis for the nitrogen-hydrogen system be done?
Solution: When we try to generate the T-xy diagram for a nitrogen and hydrogen system at low pressure, between 1-5 bar, it shows an abnormal curve. The plot shows that at a very low temperature, around -245 degrees, the mixture is completely liquid. For such a case, in which components exist in the liquid phase at very low temperature, first check the actual operating conditions of the process. At very low temperatures the components will freeze and be in the solid state, in which case VLE curve generation is not possible. Creating the T-xy plot for a nitrogen and hydrogen mixture is possible but very difficult: the user may need to regress binary parameters to fit the entire range in which N2 and H2 are liquid. P-xy plots are more reasonable for this kind of scenario, though there can still be convergence issues at some points, for which regression of the binary parameters is needed. A VLE flash in Aspen Plus is solved by equating fugacities. However, if you are in a region where an additional phase exists that you are not considering, the resulting VLE curves will be incorrect. Keywords: TXY diagram, VLE analysis, Nitrogen-Hydrogen Binary system References: None
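For reference, the phase-equilibrium criterion mentioned above (equating fugacities) is, for each component i, written here in the standard equation-of-state form:

```latex
\hat{f}_i^{\,V} = \hat{f}_i^{\,L}
\quad\Longrightarrow\quad
y_i\,\hat{\phi}_i^{V} P \;=\; x_i\,\hat{\phi}_i^{L} P
```

If a solid phase actually forms at the conditions of interest, this two-phase criterion no longer describes the system, which is why the generated curves become unreliable in that region.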
Problem Statement: This article explains how to remove and install the Aspen InfoPlus.21 Task Service.
Solution: The Aspen InfoPlus.21 Task Service uses the executable image TSK_SERVER.EXE, normally located in C:\Program Files (x86)\AspenTech\InfoPlus.21\db21\code for 32-bit installations and in C:\Program Files\AspenTech\InfoPlus.21\db21\code for 64-bit installations. The executable image itself provides a way to remove and install the task service. First, you must determine the internal version number of Aspen InfoPlus.21. You can get this information by using the AspenONE Diagnostics Tool. In this example, the internal version number of Aspen InfoPlus.21 is 12.4.
To remove the Aspen InfoPlus.21 Task Service, open a command window as an administrator and navigate to the Aspen InfoPlus.21 code folder (normally either C:\Program Files (x86)\AspenTech\InfoPlus.21\db21\code or C:\Program Files\AspenTech\InfoPlus.21\db21\code). Then, enter the command
tsk_server.exe /remove group200
and reboot the server.
To reinstall the Aspen InfoPlus.21 Task Service, enter the command
tsk_server.exe /create group200 /version xx.x
where xx.x is the Aspen InfoPlus.21 internal version number.
Keywords: InfoPlus.21 Task Service TSK_Server.exe References: None
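The two commands above can be assembled by a small helper, which is useful when scripting the procedure across several servers. A sketch under stated assumptions (the function names are hypothetical; only the command strings themselves come from this article):

```python
def remove_task_service(group="group200"):
    """Command line to remove the Aspen InfoPlus.21 Task Service for a group."""
    return f"tsk_server.exe /remove {group}"

def create_task_service(version, group="group200"):
    """Command line to reinstall the task service.

    version is the InfoPlus.21 internal version number, e.g. "12.4",
    as reported by the AspenONE Diagnostics Tool.
    """
    return f"tsk_server.exe /create {group} /version {version}"
```

The commands would still need to be run from an elevated prompt in the InfoPlus.21 code folder, as described above, and the server rebooted after a remove.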
Problem Statement: How to add more decimal digits for composition in Aspen HYSYS?
Solution: There is only one way to specify composition in the stream. If you need more decimal digits, go to File -> Options -> Units of measure and change the Variable Format settings. Keywords: Composition, Variables, Format References: None
Problem Statement: Can we use the BLOWDOWN utility in Aspen HYSYS steady-state mode?
Solution: BLOWDOWN Technology in Aspen HYSYS is used to: Design an orifice for pool fire depressurization. Determine the correct construction materials for cold case depressurization. Design a system with staggered orifice opening times to optimize the use of the disposal system. Design an orifice or assess temperature concerns for a pipeline pressurization. Assess risk to the facility based on peak pressures reached in the system during overpressure. BLOWDOWN consumes the HYSYS Dynamics license because it is a dynamic analysis. Hence, it consumes 14 (HYSYS steady state) + 22 (HYSYS Dynamics) = 36 tokens. It is not possible to run a steady-state BLOWDOWN in Aspen HYSYS. Keywords: BLOWDOWN, depressurization, Tokens, Dynamics References: https://esupport.aspentech.com/S_Article?id=000059604 https://esupport.aspentech.com/S_Article?id=000090682 https://esupport.aspentech.com/S_Article?id=000057029
Problem Statement: This knowledge base article illustrates how to create a new selector record to format IP_DiscreteDef records.
Solution: In this article, we will guide you through the process of creating a new selector record in the InfoPlus.21 Administrator for formatting IP_DiscreteDef records. When using ON/OFF as the IP_VALUE_FORMAT, the expected value is 0 or 1, not any other value.
Step-by-Step Guide:
1. Open the InfoPlus.21 Administrator.
2. Under Definition Records > DefinitionDef, right-click on the Select#Def record located in the left-hand pane.
3. Select New record defined by SelectxDef... from the context menu. In SelectxDef, the x refers to the number of characters in the longest entry of your list of selections. For example, if OFF is the longest entry, Select3Def is good enough.
4. In the New Record window, enter a name for the new selector record in the Name (up to 16 characters) field. Click the OK button to confirm.
5. In the left-hand pane, select the new record you just created.
6. In the right-hand pane, enter the value 3 (or any value as needed) for #_OF_SELECTIONS. Note: you will need to press the ENTER key to confirm this change.
7. Double-click on the field labelled #_OF_SELECTIONS in the right-hand pane to enter the repeat area.
8. Enter the appropriate information for each Selection Value.
You have now successfully created a new selector record to format IP_DiscreteDef records in the InfoPlus.21 Administrator.
Tips and Best Practices:
- When creating a new selector record, choose a name that accurately reflects the record's purpose. This will make it easier to identify and use the record in the future.
- Before creating a new record, make sure that you have selected the correct record type. Choosing the wrong record type can result in errors and data inconsistencies.
Creating a new selector record in the InfoPlus.21 Administrator is a simple process that can help you format IP_DiscreteDef records according to your specific requirements by following the steps outlined in this article.
Keywords: IP_DiscreteDef, Selector, ON/OFF, IP_VALUE_FORMAT References: None
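The SelectxDef sizing rule above (x equals the length of the longest selection string) can be sketched as a one-line helper. The function name is illustrative, not part of InfoPlus.21:

```python
def selector_def_name(selections):
    """Pick the SelectxDef definition record sized to the longest selection.

    E.g. for ON/OFF the longest entry is "OFF" (3 characters), so
    Select3Def is good enough.
    """
    longest = max(len(s) for s in selections)
    return f"Select{longest}Def"
```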
Problem Statement: This knowledge base article illustrates some A1PE trend view customization techniques usually asked by the customers.
Solution: Many customers ask about basic trend view customization options and techniques, and this KB article consolidates some of them:
1. Is it possible to change the color of trends? To accomplish this, access the Process Explorer options by clicking on the bubble located at the top left-hand corner of the interface, and select the Pen option. It is possible to assign a different color to each number to differentiate the data. For additional information about changing the default color of trending pens, please refer to the related Aspen InfoPlus.21 Knowledge Base article.
2. Is it possible to hide tag names from the A1PE screen display? Yes, certain elements of the interface can be tailored to individual preferences by using the Show option drop-down list, situated in the top right-hand corner of the screen. This feature allows users to select specific display options according to their requirements.
3. Is it possible to remove the fill under area for the trends? Select the checkbox located adjacent to the target tag name and then click the Pen icon to open the property dialog. Within this dialog, locate the Fill Under option and uncheck it to remove the fill under the trend. It is also possible to do this by changing the trend type through the plus sign in the top left.
4. Is it possible to remove grids from the background? You can simply change the grid lines to white and they will disappear.
5. Is it possible to hide/remove the start/end time and duration? Yes, you can choose what to display through the Show option drop-down list in the top right.
6. How do I select custom start and end dates in an aspenONE Process Explorer trend plot? See the related Knowledge Base article.
7. How do I change the scale for multiple trends in aspenONE Process Explorer? See the related Knowledge Base article.
Keywords: A1PE, Trend, Pen, Color, Scale, Plot, fill References: None
Problem Statement: NIST experimental data is not found when a secure user databank is a Selected database on the Components | Specifications | Enterprise Database sheet. This occurs even though experimental data exists in the system.
Cause: The NIST/TDE engine uses the CAS number as the sole identity for a compound in the TDE engine database, so Aspen Plus must pass a CAS number for each compound to the TDE database to retrieve data. If the custom database does not have a CAS number for each compound and the custom database is at the top of the Selected databanks, the CAS number will not be found and will be blank on the Components | Specifications | Selection sheet. If you launch the TDE dialog and search for binary data, you won't get any data because no CAS numbers are passed to the TDE database.
Solution: Include a CAS number for each compound in the custom database. See the following link for details: How Do I Add or Retain CAS Numbers for New Enterprise Databases
Workaround: Alternatively, if the secure user databank is removed from the Selected databanks list and the components are in the standard databanks, experimental data will be found if it exists.
Fixed in Version: Targeted for a future release
Keywords: None References: VSTS 852491
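Since data retrieval hinges on every compound carrying a CAS number, it can help to sanity-check the numbers before loading them into the custom database. A minimal validator using the public CAS check-digit rule (the helper name is illustrative; this is not part of Aspen Plus or TDE):

```python
def cas_is_valid(cas):
    """Validate a CAS registry number such as "7732-18-5" (water).

    The check digit (last field) is the weighted sum of the preceding
    digits (rightmost digit has weight 1, increasing leftward) modulo 10.
    """
    parts = cas.split("-")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        return False
    digits = parts[0] + parts[1]
    check = int(parts[2])
    weighted = sum(w * int(d) for w, d in enumerate(reversed(digits), start=1))
    return weighted % 10 == check
```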
Problem Statement: You can use the automation methods and properties to build a flowsheet remotely using, for example, Visual Basic for Applications (VBA) in Excel. You may wish to use automation to: Automate repetitive changes to flowsheet layout. Add new blocks from a list of unit names and types already existing in Excel. Make modifications to very large flowsheets, where response times can be slow.
Solution: The attached example shows the basic commands for adding a new block to the flowsheet and connecting it to existing blocks. You can use this example in Aspen Custom Modeler V11 or higher. You can use the file simple-simulation4.acmf to see how a few blocks and streams can be created. Open the file, then remove all blocks and streams on the flowsheet. You can then invoke the flowsheet script create_blocks.

' declarations for VBA
' dim b as Object
' dim s as Object
' dim pos as Variant
' dim acmApplication as Object
' dim acmSimulation as Object
' dim acmFlowsheet as Object

' trick to make the conversion to VBA code easier
set acmApplication = application
set acmSimulation = acmApplication.Simulation
set acmFlowsheet = acmSimulation.Flowsheet

' clear simulation messages
acmSimulation.OutputLogger.ClearWindow

' remove all streams and blocks
for each s in acmFlowsheet.streams
  acmApplication.msg "removing stream " & s.name
  acmFlowsheet.RemoveStream s.name
next
for each b in acmFlowsheet.blocks
  acmApplication.msg "removing block " & b.name
  acmFlowsheet.RemoveBlock b.name
next

' create a few blocks
number_blocks = 4
for i = 1 to number_blocks
  model_name = "FTank"
  block_name = "B" & Right(Cstr(i + 1000), 3)
  set b = acmFlowsheet.AddBlock (model_name, block_name)
  pos = b.PFSPosition          ' get block position
  pos(0) = 2000 * (i-1)
  pos(1) = -2000 * (i-1)
  b.PFSPosition = pos          ' set new block location
  ' change icon - SystemIcon happens to be the last one
  ' so using i mod 3, we'll never use it
  for j = 0 to (i mod 3)
    b.ExchangeIcon
  next
next

' create and connect streams
for i = 0 to number_blocks
  source_block_name = "B" & Right(Cstr(i + 1000), 3)
  dest_block_name = "B" & Right(Cstr(i + 1 + 1000), 3)
  stream_name = "S" & Right(Cstr(i + 1000), 3)
  set s = acmFlowsheet.AddStream ("FStream", stream_name)
  if i > 0 then
    s.ConnectInput source_block_name & ".outlet"
  end if
  if i < number_blocks then
    s.ConnectOutput dest_block_name & ".inlet"
  end if
next

' create level controllers
for i = 1 to number_blocks
  block_name = "B" & Right(Cstr(i + 1000), 3)
  stream_name = "S" & Right(Cstr(i + 1000), 3)
  set b_PID = acmFlowsheet.AddBlock("PIDincr", "LC_" & block_name)
  set s_PV = acmFlowsheet.AddStream("ControlSignal", "level_" & block_name)
  set s_OP = acmFlowsheet.AddStream("ControlSignal", "flow_" & block_name)
  s_PV.ConnectInput block_name & ".M"
  s_PV.ConnectOutput b_PID.Name & ".PV"
  s_OP.ConnectInput b_PID.Name & ".OP"
  s_OP.ConnectOutput stream_name & ".F"
  b_PID.Initialize
  b_PID.Action.value = "Reverse"
next

The same code can be used in VBA in Excel; see the file create-flowsheet.xlsm. First open the simulation file in ACM, then update the path in Excel cell A2, then click the button CommandButton1. The code shows how to create blocks, streams, and control signal streams. It also shows how to specify the location of blocks on the flowsheet, as well as icons. There is no control over the layout of streams. More information can be found in the Aspen Custom Modeler online help; look for the automation methods. Keywords: VBA, vbscript, OLE, ActiveX, automation References: None
Problem Statement: The following symptoms may be seen for this issue:
"Error executing calculation: Failed to connect to Aspen Process Recipe Data Source" - this can occur when trying to use Process Recipe Explorer with an AspenCalc calculation attached to the recipe.
"Could not connect to Transition Manager Server on: [Server name]" - this error can occur when starting Process Recipe.
It has been observed that clicking Tools | Options and unchecking Enable Transition Manager makes the error message disappear for a while, but it sometimes returns intermittently or after a reboot of the server.
Solution: Aspen Process Recipe is only supported with 32-bit AspenCalc. If 64-bit AspenCalc is installed, it is possible that Recipe Explorer will not function properly with all its features and users will see these errors. If this happens, the solution is to unregister the 64-bit AspenCalc CalcScheduler and register the 32-bit CalcScheduler. This can be done with the following steps:
Close Recipe Explorer. If you have more than one web client (PCWS) connected, go to the PCWS webserver/atcontrol web page > Configuration tab, uncheck the Sequencer checkbox, and hit Apply to prevent any PCWS servers from accessing the Recipe server; if you only have one, simply close the browser.
Open Task Manager and end any running process whose name starts with ATM_.
Open InfoPlus.21 Manager and shut down the IP.21 database.
Open Windows Services and stop the AspenTech Calculator Engine service. Also record the Log On As user account, including domain name and password, in the Log On tab of the service properties (needed when re-registering CalcScheduler below).
Back in Task Manager, make sure the CalcScheduler.exe process is not running. Kill it from Task Manager if it is still running.
Open a Command Prompt as Administrator.
Unregister the 64-bit version of CalcScheduler.exe. First navigate to the 64-bit bin directory by running this command:
CD C:\Program Files\AspenTech\Aspen Calc\bin
Then uninstall by running this command:
CalcScheduler.exe -uninstall
Re-register the 32-bit version. First navigate to the 32-bit bin directory by running this command:
CD C:\Program Files (x86)\Aspen Calc\bin
Then re-install by running this command:
CalcScheduler.exe -install DOMAIN\Username Password
NOTE: the -install command can be followed by the username and password if you want to register it to run as the InfoPlus.21 Task Service user account, OR you can change it manually in the logon step below and run this command with just -install.
In Windows Services, if you did not install the 32-bit CalcScheduler.exe with the correct username and password, change the logon setting of the AspenTech Calculator Engine service:
- Right-click the service and select Properties.
- Go to the Log On tab and select the radio button for "This Account".
- Enter the username and password of the Aspen InfoPlus.21 administrator account (this should be the same account that opens the MSSQL database and runs the Aspen InfoPlus.21 Task Service).
Start the AspenTech Calculator Engine service. Make sure it stays running.
Open InfoPlus.21 Manager and start the database.
Note: for versions prior to V12, Process Recipe requires InfoPlus.21 to also be 32-bit. For versions V12 and later, InfoPlus.21 should be 64-bit but AspenCalc must be 32-bit. Keywords: Process Recipe, AspenCalc, error, transition, manager, connect, database, 64-bit, 32-bit References: None
Problem Statement: This solution explains where the Plot Groups from a controller in PCWS are stored.
Solution: The multi-plot (Plots view) My Plot Groups from PCWS are stored in the UserProfile.config file located in C:\ProgramData\AspenTech\APC\Web Server\Config. The UserProfile.config file contains, per user, all the saved Plot Groups. It holds all the plot group templates and calls up the information for the controller tags that are displayed in the Plot Group plots. To back up these Plot Groups, back up the UserProfile.config file. The file can then be moved from one server to another. Keywords: PCWS, Plots References: None
Problem Statement: This solution explains the meaning of the Bode plot axes and their application.
Solution: The Bode plot represents the frequency-domain response of a CV to an MV over a range of sine-wave periods (frequencies). This analysis converts the identified model into a frequency-based representation and shows the Bode magnitude on the Y-axis versus the logarithm of frequency on the X-axis (where period = 1/frequency). The Bode plot also includes the +/- 2 sigma (95%) and +/- 1 sigma (68%) uncertainty range bands (error model bands). The uncertainty information is used during step testing to identify MV-CV model relationships that need additional data:
- Narrow uncertainty bands indicate an identified model of good quality.
- Regions where the uncertainty band is wide indicate frequency ranges of greater model uncertainty.
The Bode plot starts at nearly zero frequency (which corresponds to the steady-state gain), at an equivalent period of one time to steady state. The dynamic gain of the Bode plot decreases with frequency, unless there are oscillatory responses or disturbances in the model. In DMC3 and DMCplus, the blue curve represents the best estimate of the nominal model. More details about assessing the Bode plot can be found in solution https://esupport.aspentech.com/S_Article?id=000033978 Keywords: DMC3, Bode Plot References: None
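To illustrate why the dynamic gain falls off with frequency, consider a simple first-order response (an illustrative case, not the identified model itself) with steady-state gain K_p and time constant tau:

```latex
\left| G(j\omega) \right| \;=\; \frac{K_p}{\sqrt{1 + (\tau\omega)^2}},
\qquad \omega = \frac{2\pi}{T}
```

As the frequency approaches zero (period T approaching one time to steady state and beyond), the magnitude approaches K_p, the steady-state gain; as frequency increases, the magnitude decreases monotonically, matching the behavior described above.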
Problem Statement: A special character on an alphabetic character, such as an accent ( ' ), e.g. á, in a variable description can lead to application problems after the controller is deployed. This behavior can cause issues such as the controller status remaining on Start, no capture of any application mode change (Control, Calibrate), and failure to run steady state.
Solution: This problem can also affect the Simulate step. The way to prevent and fix this issue is simply to remove these kinds of characters from the variable descriptions. The major difficulty is that the error is not captured by any pop-up window or log. However, it can be noticed from the Simulate stage: there is no change in the state of the controller and no solution is performed. If, after verifying that all descriptions are clean, the controller still has problems, please contact support for further assistance. Keywords: DMC3, PCWS, Simulate References: None
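A quick way to spot or strip such accented characters from variable descriptions before deployment is sketched below (hypothetical helpers built on Python's standard library, not an AspenTech tool):

```python
import unicodedata

def clean_description(text):
    """Replace accented letters with their unaccented base form, e.g. 'á' -> 'a'."""
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def has_special_characters(text):
    """True if the description contains any non-ASCII character."""
    return any(ord(ch) > 127 for ch in text)
```

Descriptions flagged by has_special_characters can be reviewed, and clean_description gives a plain-ASCII candidate replacement.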
Problem Statement: This solution lists the files that should be backed up when migrating PCWS to a new version and server.
Solution: Typically, only a few items need to be migrated for the Production Control Web Server. The files of most interest are any plot files (.xml files) saved for Aspen Watch history plots. For these files, keep in mind that you will have to modify the data source names used inside the xml files if the data source name for your new Aspen Watch server is different. The files you may want to copy are under the following paths:
C:\inetpub\wwwroot\AspenTech\ACOview\plots
C:\inetpub\wwwroot\AspenTech\ACOview\reports
C:\inetpub\wwwroot\AspenTech\ACOview\rtoplots
C:\inetpub\wwwroot\AspenTech\ACOview\rtoreports
In addition, the following path contains a copy of the .xml plots saved from KPI trends:
C:\inetpub\wwwroot\AspenTech\Web21\plots
If the PCWS display has been customized, you can find the user.display.config files under:
C:\ProgramData\AspenTech\APC\Web Server\Products
Finally, if you have columnset customizations, it may be easier to just use the web interface to re-create these changes. Keywords: PCWS References: None
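The data source rename inside the saved plot .xml files can be scripted. A simple sketch (the helper name is hypothetical, and a plain text replace is assumed to be safe for these files; verify on a copy first):

```python
from pathlib import Path

def retarget_plot_files(folder, old_source, new_source):
    """Rewrite the Aspen Watch data source name inside saved plot .xml files.

    Returns the names of the files that were changed.
    """
    changed = []
    for xml_file in Path(folder).glob("*.xml"):
        text = xml_file.read_text(encoding="utf-8")
        if old_source in text:
            xml_file.write_text(text.replace(old_source, new_source),
                                encoding="utf-8")
            changed.append(xml_file.name)
    return changed
```

Run it against each of the plot folders listed above after copying them to the new server, passing the old and new Aspen Watch data source names.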
Problem Statement: When GDOT has an error reading a tag.parm, the log file shows the message: "Read error for OPC item…", followed by: "Execution skipped due to bad values (<iteration count>)". Given this, how are the predictions handled after the read errors stop and all variables return to good status?
Solution: For a bad CV, prediction continues as normal, but the prediction bias is fixed. Once the bad CV becomes good again, the bias update is simply resumed. In the case of a bad MV, prediction continues as normal using any other MV that affects the CV, and the bias is updated as normal. If there is no movement in any valid MV signal that has an effect on the CV, then any change to the CV is absorbed in the initialization cycle for just that MV in order to prevent a prediction bump. Keywords: GDOT, Predictions, Manipulated Variable, Control Variable References: None
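The CV behavior described above can be pictured with a toy bias update (purely illustrative pseudologic, not GDOT source code): the bias is held while the signal is bad and resumes normally when it returns to good.

```python
def update_bias(prev_bias, measured, predicted, status_good):
    """Toy CV prediction-bias update.

    While the CV read is bad, the last bias is held so the model prediction
    continues without a bump; once the status is good again, the bias
    update simply resumes.
    """
    if not status_good:
        return prev_bias          # freeze bias while the read is bad
    return measured - predicted   # resume the normal bias update
```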
Problem Statement: The scheduled "Autorun" operations for PID groups explicitly use the default "SQLplus on localhost" ODBC connection, which is created by the Aspen Watch server software installation. When the ODBC connection is not properly set up, this can lead to problems such as PCWS not showing results under PID Loop Analysis after setting group configurations. In addition, checking the PID Watch logs located in the path C:\ProgramData\AspenTech\APC\V10.0\Builder\PidResults shows the following messages: "PID watch AutonRun ODBC Server Connection Error" and "PID watch auto-scheduled End Program".
Solution: To solve this issue, it is necessary to create an additional ODBC connection on the Aspen Watch server. This connection can be identical to the existing one, except that the ODBC Data Source Name must be "SQLplus on localhost". The new ODBC connection should look as follows:
ODBC Data Source Name: SQLplus on localhost
Description (optional): PIDWatch SQLplus ODBC Data Source
Aspen Data Source: this should be the same one you use for your current "SQLplus on Aspen AW server" connection.
Keywords: PIDwatch, PCWS, ODBC References: None
Problem Statement: MPF: Lock Attempt Timed Out can be caused by many circumstances, as described in solution 000015222, "Scenarios for causes and resolutions for MPF LOCK error messages". In addition to the details in the referenced solution, antivirus software can also lead to MPF: Lock Attempt Timed Out problems. This solution frames, as a first approach, the antivirus exclusions for some AspenTech folders.
Solution: To avoid or prevent an MPF: Lock Attempt Timed Out caused by antivirus and/or backup software, it is necessary to set the following exclusions:
- C:\Program Files (x86)\AspenTech
- C:\Program Files (x86)\Common Files\AspenTech Shared
- C:\ProgramData\AspenTech
Keywords: MPF, Antivirus, DMCplus References: None
Problem Statement: Understanding the mechanisms and data flow involved can be helpful when trying to diagnose problems with PID Watch.
Solution: PID Watch Configuration
Some information is provided below for getting the various system components configured so that the PID Watch automated group scheduling functionality works.
1. If upgrading the software installation, the first step is to apply the latest available Service Pack and/or ER software updates on the appropriate system servers (including Aspen Watch Desktop and Aspen Watch Server software on the AW host machine, and ACO Base updates on the Production Control Web Server host).
2. The second step, after installing the Aspen Watch Server software (Service Pack), is to run the Install Database Configuration utility from the Start->Programs->AspenTech->Aspen Manufacturing Suite->Aspen Watch->Install Database Configuration Start menu item. This requires a stop and restart of the InfoPlus.21 database when complete to update the Aspen Watch records.
3. The third step is to configure an ACOView shared folder on the Production Control Web Server host. Set up this \Inetpub\wwwroot\AspenTech\ACOView shared folder on the Web Server host with Full Control permissions access only for the Aspen Watch database administrator (InfoPlus.21 Task Service user), or alternatively for a local Administrators group if the Aspen Watch administrator is a member of that local Administrators group.
4. Although not a required part of the PID Watch configuration, note also that the shared folders on the Aspen Watch/InfoPlus.21 database host should be configured for similar access permissions. The default software installation configures the directories listed below with "Everyone" Full Control permissions. The "Everyone" share permission should be removed to reduce the chance of virus infiltration, and replaced with a local Administrators group with Full Control permissions that includes the InfoPlus.21 database administrator user.
These shared folders include:
\Program Files\AspenTech\InfoPlus.21\c21\h21\agghis
\Program Files\AspenTech\InfoPlus.21\c21\h21\arcs
\Program Files\AspenTech\InfoPlus.21\db21\etc
\Program Files\AspenTech\InfoPlus.21\db21\Group200
5. The ADSA Client Config Tool is used to configure the Aspen Watch server host appropriately as a Public Data Source. An ODBC System Data Source is then configured using SQLplus on localhost and selecting the appropriate Aspen Watch server as the Aspen Data Source. Note: when running in the automated scheduling mode, the PID Watch program assumes (or uses) the "SQLplus on localhost" ODBC connection.
6. A last important note: do not create PID Definition group record names containing spaces or special non-alphanumeric characters; use underscores rather than spaces for record naming conventions. Note that entry of spaces and "special" characters for the PID Definition Group Record Name is now prevented in the Production Control Web Server PID Group Configuration display.
If these configuration steps are followed, the automated PID Watch scheduling program has a high chance of working successfully. Additional issues, including network permissions, may prevent the software from working correctly, so information is provided in the sections below for troubleshooting potential issues.
PID Group Scheduling Troubleshooting
The automated PID Group scheduling program is performed by the SQLplus query record AwPIDGrpSched, which should run at a 1-minute execution frequency. A copy of the saved SQLplus QueryDef record is located in the Program Files\Aspen Watch\etc\sql\Base folder. The AwPIDGrpSched program determines the activities performed for each PID Group scheduling record based on the configured Start/End time and the current system time.
The statuses reported by the PID Group scheduling record are defined by the Select10Def AW-PIDGRP-STATUS record:
0 Aborted
1 Idle
2 Scanning
3 Running
4 Completed
5 Plot Data
6 Reset Time
The PID Group scheduling record will indicate a status of Idle when the current system time is greater than the PID Group scheduling record Start Time.
PID loop scanning: When current system time > PID Group Start Time, the program sets the scan mode to FAST SCAN for the PID Definition records defined in the PID Group scheduling record. The PID Group scheduling record status is set to Scanning.
Fast scanning completed: When current system time > PID Group End Time, the appropriate PID loop scan modes are reset to NORMAL scanning rates. The output analysis filenames are created and saved in the PID Group scheduling record AW_LAST_SAVFILE field. On completion, the status is set to Running. When this step is completed, check the PID Group scheduling record AW_LAST_SAVFILE field for an appropriate output analysis filename of the format:
AWRecName_date_time_PidAnalysis.txt
where:
AWRecName is the PID Definition Record name
date is the AW_PIDGRP_START 7-character date, for example: 16jan22
time is the AW_PIDGRP_START 4-character time, for example: 2200
AutoRun of PID Watch program: At PID Group End Time + Offset, the pidwatch.exe program is auto-scheduled to perform an analysis [AUTORUN] in background mode for all PID Group defined tags using the Start/End Time collection interval configured in the PID Group scheduling record. On completion of this step, the PID analysis text files are saved in the Aspen Watch\PidResults folder and the status is set to Completed. If successful, the PID analysis results are saved in the PID Definition record fields, and are updated in the Production Control Web Server PID Loops Analysis web display.
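The AW_LAST_SAVFILE naming convention above can be reproduced with a short helper (the function name is hypothetical, and the %d%b%y rendering of the date assumes an English locale):

```python
from datetime import datetime

def analysis_filename(rec_name, start):
    """Build AWRecName_date_time_PidAnalysis.txt from the AW_PIDGRP_START time."""
    date_part = start.strftime("%d%b%y").lower()  # 7-character date, e.g. 16jan22
    time_part = start.strftime("%H%M")            # 4-character time, e.g. 2200
    return f"{rec_name}_{date_part}_{time_part}_PidAnalysis.txt"
```

This is handy when checking whether the expected results files actually appeared in the PidResults folder for a given group record and start time.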
In the event that this step fails, troubleshoot this operation by manually running the pidwatch.exe program in background mode from an MS-DOS prompt using the following commands shown below. Open up an MS-DOS window, then path to the Aspen Watch folder: cd %ASPENWATCH% pidwatch.exe /cmd GroupRecName 1 PidResults where: GroupRecName is the PID Group Scheduling Record name PidResults is the folder where the analysis results files are created argument = ?1? is used for AUTORUN Note that you will need to have valid tagnames, and valid Start and End Times when running this command from an MS-Dos window, or the program will silently fail. As also noted earlier, the automated scheduling program assumes that there is an ODBC System Data Source configured as SQLplus on localhost on the Aspen Watch server. The program will likely fail if this is not the case. AutoLoad of PID Watch analysis results file In this phase at End Time + 2 * Offset, the pidwatch.exe program is auto-scheduled to perform an [AUTOLOAD] of the previous analysis results files in background mode for producing PNG plot files. Again, the PNG files are saved in the Aspen Watch\PidResults folder. On completion the status is set to Plot Data. In the event that this step fails, the pidwatch.exe program can be manually run in background mode from an MS-DOS prompt using the following commands shown below. First make sure that there are PID analysis results files in the PidResults folder from the previous step. The results filenames should match the names in the PID Group scheduling record AW_LAST_SAVFILE fields. Then, open up an MS-DOS window, and path to the Aspen Watch folder: cd %ASPENWATCH% pidwatch.exe /cmd GroupRecName 0 PidResults where: GroupRecName is the PID Group Scheduling Record name PidResults is the folder where the analysis results files are created argument = ?0? 
is used for AUTOLOAD
Copy plots to the Web Server
In this phase, at End Time + 3 * Offset, the program first attempts to copy the PNG files in the Aspen Watch\PidResults folder to the shared ACOView\pidimages folder on the Production Control Web Server. It is important that this folder on the Production Control Web Server is shared, granting access to the InfoPlus.21 Task Administrator user on the Aspen Watch server, because the SQLplus program is what attempts to copy these files. If this step fails, it is likely that the ACOView folder share permissions have not been configured or have some other permissions issue. To troubleshoot, first try to copy PNG files from the Aspen Watch server to the Web Server manually from an MS-DOS window. For the manual file copy attempt, make sure that the user is the same username/password account used for starting the Aspen InfoPlus.21 Task Service (and used for executing the AwPIDGrpSched SQLplus query). To verify a manual file copy from the Aspen Watch Server to the Production Control Web Server host, open an MS-DOS window and change to the Aspen Watch\PidResults folder:
cd %ASPENWATCH%\PidResults
copy <filename>.PNG \\WebServerName\ACOView\pidimages\*.*
Following the file copy, the next step in this phase of the program is to assemble an HTM web page made up of the PNG files. This HTM page is temporarily created in the InfoPlus.21\db21\Group200 folder and then copied to the ACOView\reports\PID Analysis folder on the Production Control Web Server. On completion, the status is set to Reset Time.
Note: Failure of the file copy step in earlier versions of the software would result in an aborted program status and would not subsequently reset the Start/End Times. This behavior has been corrected: failure to copy files from the Aspen Watch Server to the Production Control Web Server, due to permissions issues or otherwise, now results in a 'silent failure'
for the file copy, and the AwPIDGrpSched SQLplus query continues to the next Reset Start/End Times program step. A message that the file copy to the Web Server host failed is noted in the Log file.
Reset Start/End Times
The last step in this phase is to reset the Start Time and End Time in the PID Group scheduling record based on the Recurrence interval, and then to calculate a delta performance index result. On completion of this step, the status is set to Idle. If the copy of files to the Production Control Web Server fails, the Start/End Times are likely not reset and the program will indicate an Abort status.
Note: A Log file is now created in the Aspen Watch\PidResults folder to track the status of the various auto-scheduled PID Group tasks, using the naming convention: GroupRecName.log
On completion of a successful auto-scheduled PID Group run, the Log file will contain:
PID Group Scheduling Log File for: GroupRecName
Setting FAST Scan Mode for PID records in PID Group Schedule record: GroupRecName
Setting NORMAL Scan Mode for PID records in PID Group Schedule record: GroupRecName
Running pidwatch.exe AutoDBRun program for: GroupRecName
Running pidwatch.exe AutoDBLoad program for: GroupRecName
Directory exists. Create PID Analysis folder on the Web Server
Reset Start and End Time for PID Group Schedule record: GroupRecName
In the event that the program does not successfully complete, the last message contained in the log file, or the specific error message that is recorded, will assist in troubleshooting issues. Additional debug trace log files for the PID Watch AutoRun and AutoLoad tasks have been added to the recent Service Pack ER2 release.
These debug log files are also created in the Aspen Watch\PidResults folder, with the filename convention:
GroupRecName_DbgAutoRun.log
GroupRecName_DbgAutoLoad.log
Additional Information
PID Analysis Results mapping to PID Definition Record
Aspen Watch Definition record parameter mapping of performance metrics calculations:
Description | AW_PIDDef Record parameter
Steady-State Performance Index | AW_ACTCPIP
Performance Index calculation | AW_ACTCTLP
OP Saturation percent | AW_ACTDSTP
Oscillation Index calculation | AW_ACTERTP
In Service percent | AW_ACTSRVP
Delta Performance Index Change | AW_ACTTRKP
Oscillation Cycle Time | AW_ACTUPDP
Aspen Watch Definition record parameter mapping of performance metrics limits:
Description | AW_PIDDef Record Parameter | Initial (default) value
Time | AW_PCTCPIP | 0.25 * TTSS
Performance Index warning limit | AW_PCTCTLP | 80%
Maximum OP Saturation percent limit | AW_PCTDSTP | 10%
Performance Index critical limit | AW_PCTERTP | 50%
Minimum In Service percent limit | AW_PCTSRVP | 90%
<unused> | AW_PCTTRKP |
<unused> | AW_PCTUPDP |
Keywords: None
References: None
Problem Statement: How to export (extract) vectors from PCWS, as an alternative to Solution 122675, 'Is there a way to generate a CLC or extract a list of vectors from Aspen Watch?'
Solution: In the PCWS 'Online' tab, select any variable's 'History Plot'. In the pop-up window, go to the 'Export' tab / 'Tag History' and set the Start/End and Interval parameters. The vectors are made available as a zip file (found in :\inetpub\wwwroot\AspenTech\ACOView\Export). If you want to export more than one variable, in the pop-up window go to the 'Tag' tab / 'Tag Selector' and choose the Variable, PID, and Miscellaneous tags required. Plot them and then repeat the process described above. If you get an 'Unexpected Error' message, the ACOView folder needs to be shared with the IP21 task users.
Keywords: Production Controller Web Server, Aspen Watch, Extract, Export, PID, Miscellaneous Tags
References: None
Problem Statement: This article addresses the following questions: If the AtACOApp Windows Share is removed, will the rest of the system still work properly? Does Aspen Watch depend on this share existing in order to operate properly?
Solution: The AtACOApp Windows Share is no longer required since Version 8.0. It is therefore no longer needed for Aspen Watch, or the rest of the system, to run properly. However, if you choose to keep this folder, it is good practice (depending on your company's requirements) to restrict access to shared folders.
Keywords: AspenWatch, AtACOapp
References: None
Problem Statement: What is the best way to back up Aspen Advanced Process Control servers and configure virus scanning?
Solution: Aspen Process Control Servers include machines running the online applications, including DMCplus, SmartStep, Aspen IQ, Aspen Watch, Aspen Process Controllers, Nonlinear controllers, State Space Controllers, and Apollo applications. The standard approach for backing up physical APC servers, and for virus scanning, has always been to omit the AspenTech directories. The newest versions of the software and servers require that we also consider the following: Since V7.2 of the APC software, C:\Program Files\AspenTech\APC\AC Online\bin no longer needs to be excluded from virus scans or from backups. This folder holds the executables for the advanced control system, and these are not user changeable. The second location to consider is the C:\ProgramData (Win2k8) directories, which need to be excluded from virus scanning and backup, as that is where the online software has the MPF regions stored and mapped. These Solutions have been provided as attachments to this Solution.
Consideration of the virtual environment
There are several strategies for backing up when using a virtual machine:
* Follow the Virtual Machine software vendor's recommendations for automated and scheduled backups.
* Use snapshots for event-based backups:
  - After a new application is commissioned
  - Before and after upgrades (MS or AT)
* Treat the virtual machine like a physical machine.
* Use your backup software to grab the files that are changing on the APC machine (application files).
Keywords: None
References: None
Problem Statement: Are there recommendations for moving KPI's in Aspen Watch when upgrading/migrating?
Solution: Here is a basic procedure to preserve your KPIs in Aspen Watch when moving the database to a new server.
To migrate Aspen Watch to a new version and server:
1. Install the latest version of APC Performance Monitor on the new server and apply any patches. This installs the latest version of InfoPlus.21 and all required components for Aspen Watch. NOTE: Do not run Install Database Configuration yet (that will be done in the last step).
2. Next, migrate the InfoPlus.21 database and history filesets to the new server (there are very good instructions for this in the IP.21 knowledge base). This involves:
- Copying over the snapshot file, config.dat file, and history filesets.
- Running h21chgpaths.exe to rename the history archive paths to use the new machine name.
- Upgrading the InfoPlus21.snp file to match the new version of InfoPlus.21.
3. If you are using Aspen Watch KPIs and you have your own customized KPI configuration file, copy that to the new server.
4. Copy over everything under the \Apps folder from the old Aspen Watch server to the new one.
5. Configure the Cim-IO logical devices file (and the services file for the corresponding port numbers) on the new server. Use Cim-IO Test API to confirm the connections.
6. Once the new database is up and running (with migrated filesets and an upgraded snapshot), run Install Database Configuration. This will:
- configure the Aspen Watch-related external tasks on the new system
- upgrade the database to use the new Aspen Watch features
- upgrade the KPI records (it should preserve whatever records you have already developed).
NOTE: As long as you keep a copy of your snapshot file, you can always revert to it if something does not go right.
To migrate PCWS to a new version and server:
There is typically very little that needs to be migrated for PCWS. If you have columnset customizations, it may be easier to just use the web interface to re-create these changes.
The only folders of interest might be any plot files (.xml files) that you saved for Aspen Watch history plots. Note that you will have to modify the Data Source names used inside these .xml files if the data source name for your new Aspen Watch server is different. The C:\inetpub\wwwroot\AspenTech\Web21\plots folder contains a copy of the .xml plots saved from the KPI trends. In addition, you may want to copy the files under C:\inetpub\wwwroot\AspenTech\ACOview:
\plots
\reports
\rtoplots
\rtoreports
Finally, the user.display.config files located under C:\ProgramData\AspenTech\APC\Web Server\Products can be copied from this directory in case the PCWS display has been customized.
Keywords: customization, formatting, datasource
References: None
Problem Statement: External Targets are a common request from APC users. This is a quick guide on how External Targets can be set up in DMC3 Builder and DMCplus Build.
Solution: Setting External Targets in DMCplus:
To enable External Targets in a CCF file, select the Tools menu and then select Options. Select the General tab, where you will find the External Targets option. This option enables External Targets for the controller and allows you to select between three choices:
Not used - External Targets are not taken into consideration.
Full RTO - Real Time Optimization values.
Limited Use (IRV) - Ideal Resting Values.
Please refer to Solution https://esupport.aspentech.com/S_Article?id=000015581, 'How do I pick between Full Use (RTO) and Limited Use (IRV) type when enabling External Targets?', for more detailed information about RTO and IRV. In addition, this window allows you to enable External Targets in the online controller.
The next step is to select the variable(s) for which you want to enable the External Target. After selecting a variable, you will notice that the External Target flag is now active in the top ribbon. Check this flag, and new parameters are displayed for the selected variable. Finally, choose the ETCV parameter and double-click on it. Change the default value to 1, and in Tag name enter the tag that will be written/read by the external software. In the Keyword field you can also change what kind of action the external software will have on that tag. You also need to specify the Cim-IO device and source.
Setting External Targets in DMC3:
In a DMC3 file you can set the External Target from the Optimization tab in the controller tree. On the Optimization tab, go to the top ribbon and click Configure Optimizer, then select the ET option in the Target type selector on the Case Actions tab. Next, select which variables will have an External Target; this is done by selecting the External Target type in the Target option for each variable. On the Simulation tab, click on the variable that has the External Target and find the TARGET attribute.
You will also notice that the combined status changes to Target. In DMC3, the TARGET attribute is the equivalent of the ETCV entry described above for the CCF file. Go to the Deployment tab in the controller tree. On this tab, select the variable that has the ET, and the TARGET attribute is shown in the variable detail panel. If it is not shown, this parameter can be enabled from the Customize option on the top ribbon: click Customize, find the TARGET option, and check it. Finally, the IO Source, IO Tag, and IO database information should be filled in, as was done in the case of a CCF file.
Keywords: DMCplus, DMC3, External Target
References: None
Problem Statement: In DMC3 Builder it is possible to export vectors and calculated vectors in the .dvp format. However, this format cannot be read by Microsoft Excel or Aspen IQ Model. This represents a problem, since calculated vectors cannot be included in an IQ Model dataset.
Solution: Although the .dvp format cannot be used directly in IQ Model, we can transform it into the .vec format using DMCplus Build. Once the format is changed, it is possible to use the IQ Model Excel add-in to send this information to an Aspen IQ model.
1. Select the vectors (these can be any vectors, including calculations) and export them from the DMC3 application. The vectors will be exported in the .dvp format.
2. Open DMCplus Model and select the DMC3 application. Import the .dvp vectors file. Select the desired vectors and export them again, but this time select the .vec extension.
3. Import the .vec vectors into an Excel spreadsheet. You can do this by selecting the Data tab and then selecting 'From Text'. Once the vectors are imported, make the required changes using the IQ Model Excel add-in to import the data into the Aspen IQ model.
Keywords: DMC3 Builder, DMCplus, Aspen IQmodel
References: None
Problem Statement: This knowledge base article illustrates how to enable Aspen Watch Performance Monitoring in Aspen DMC3 Builder.
Solution: Aspen Watch Performance Monitor is a tool that allows for continuous monitoring of the performance and integrity of applications deployed to an online application server. This feature is available for all three application types (FIR, MIMO, and MISO) developed in Aspen DMC3 Builder, provided that the application has been developed to the Controller and Plant (or Deployment) stages and is ready for deployment to an online application server. Enabling Aspen Watch monitoring is a straightforward process that can be done in two ways: Enable Aspen Watch monitoring during deployment of an application: To enable Aspen Watch monitoring during deployment of an application, follow these steps: Access the Deploy dialog box in Aspen DMC3 Builder Select the Enable Aspen Watch monitoring check box Click the Deploy or Redeploy button Note: Ensure that Aspen Watch Performance Monitor and Aspen Production Control Web Server are installed and operating at your site before enabling Aspen Watch monitoring. Enable Aspen Watch monitoring by setting an entry in the Application Details dialog box: To enable Aspen Watch monitoring by setting an entry in the Application Details dialog box, follow these steps: Access the Application Details dialog box in the Simulation window of Aspen DMC3 Builder Locate the entry for Enable Monitoring and set it to Yes Note: Set the Enable Monitoring entry to Yes only if Aspen Watch Performance Monitor and Aspen Production Control Web Server are installed and operating at your site. Once Aspen Watch monitoring is enabled, users can access the web-based user interface provided by Aspen Watch Performance Monitor to monitor the performance and integrity of the deployed application. 
The key performance indicator (KPI) plots are displayed in the History tab of Aspen Production Control Web Server, providing color-coded indicators of the performance of various facets of the controller's characteristics, along with drill-down access to detailed information and diagnostics. Keywords: Performance Monitoring, DMC3, Deploy, Enable. References: None
Problem Statement: Sometimes users may encounter a 'Database full' error message when using Aspen Watch Maker. This error message can occur during any operation that increases the number of records or the memory size of the database. This article explains how to increase the maximum database memory size to fix the 'Database full' error in Aspen Watch Maker.
Solution: To increase the maximum database memory size, follow these steps: Launch InfoPlus.21 Manager. If the InfoPlus.21 database is running, click the STOP InfoPlus.21 button to shut down the database. In the Defined Tasks list, double-click TSK_DBCLOCK. The New Task Definition area displays the task attributes. Modify the maximum database word size in the Command line parameters text box (this is the second item in the text box). The minimum value allowed in Aspen Watch is 30000000. You may want to increase it to 35000000 or 40000000 if the minimum value is not enough. See the Aspen InfoPlus.21 Administration Manual for more information. After making this change, click the UPDATE button to save the changes. You may want to double-click the task again to verify that the changes were accepted. Start the database by clicking the START InfoPlus.21 button. By following these steps, you can increase the maximum database memory size and fix the Database full error in Aspen Watch Maker. It is important to note that the Aspen InfoPlus.21 Administration Manual provides more information about this process, and it is recommended that users refer to this manual for further guidance. In conclusion, the Database full error in Aspen Watch Maker can be frustrating for users, but it can be easily fixed by increasing the maximum database memory size. By following the steps outlined above, users can increase the database memory size and continue using Aspen Watch Maker without any further issues. Keywords: Database full, Aspen Watch Maker, memory size, DBCLOCK References: None
Problem Statement: When and why should I declare a component as a Henry component?
Solution: In an activity coefficient based property method such as NRTL, where we cannot use an equation of state (EOS) for the liquid phase due to the non-ideality of the system, we derive the fugacity of a component in the liquid from a reference measurement of the equilibrium, which we use as a reference fugacity. We then account for the non-ideality of the mixture by applying a correction factor, the activity coefficient (gamma). For a solvent, the liquid reference fugacity is defined as the pure-component vapor pressure, and the values come from an empirical correlation, usually the extended Antoine equation. In an EOS based property method, we instead use the equation of state in both vapor and liquid phases to directly compute the fugacities. For non-condensable light gases that are usually supercritical at the process conditions, the vapor pressure is meaningless and therefore cannot serve as the reference fugacity. The reference state for a dissolved gas is redefined to be at infinite dilution, and the equilibrium is described using the Henry's constant, which becomes the reference fugacity. The activity coefficient is converted to infinite dilution. When we select Henry components on the Methods | Specifications form, we are telling Aspen Plus to calculate VLE for those components using the Henry's constant. If we do not, and we have supercritical components, meaning components above their critical temperature at the process conditions, Aspen Plus will extrapolate the saturation pressure (PLXANT) beyond the critical temperature, which is meaningless and may lead to very incorrect results. Hence, the recommendation is that a component above its critical temperature should be handled as a Henry component.
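For reference, the two liquid-phase fugacity formulations described above can be written in a standard textbook form (a general sketch, not taken verbatim from the Aspen Plus documentation; the subscript i denotes the component):

```latex
% Solvent (activity-coefficient approach): pure-component reference fugacity,
% approximated by the vapor pressure, e.g. from the extended Antoine equation
f_i^{\,l} = x_i \,\gamma_i \, f_i^{*,l}, \qquad f_i^{*,l} \approx p_i^{\mathrm{sat}}(T)

% Dissolved supercritical gas (Henry component): infinite-dilution reference,
% with Henry's constant as the reference fugacity and an unsymmetric
% (infinite-dilution normalized) activity coefficient
f_i^{\,l} = x_i \,\gamma_i^{*}\, H_{i,\mathrm{solvent}}(T, p)
```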
Some people prefer to model the solubility of other sparingly soluble components (less than 3 mmolal concentration of the gas in the liquid) as Henry components, because they find it easier to adjust the Henry parameters to fit their data, especially when the composition of those components in the liquid is small. We could achieve similar results by fitting the gamma model binary parameters instead of Henry's constants.
Keywords: Henry components, fugacities.
References: None
Problem Statement: This KB explains how to create an aspenONE Process Explorer SQLplus report that has links to launch the tags on the list.
Solution: Step 1: Create a procedure def record that contains the information for the table. Here is a small example with tagname and description, where the tag name is built to be a link. This is done in a procedure so that we can build a temporary table and then select the results from it. The procedure name and arguments can be whatever you want. The purpose of the procedure is to show how a link can be created for a SQLplus report where the result is HTML.
PROCEDURE GetReportFormat( hostName char(30), dataSource char(30), queryString char(30))
LOCAL tempName;
SET LOG_ROWS = 0;
DECLARE LOCAL TEMPORARY TABLE MODULE.TEMP(Name CHAR(256), Description CHAR(32));
FOR (SELECT name AS Name, IP_DESCRIPTION AS Description FROM IP_AnalogDef WHERE name LIKE queryString) DO
  tempName = '<a target=_blank href=http://' || hostName || '/ProcessExplorer/aspenONE.html?ProcessExplorer?nav=true&&src=' || dataSource || '&&tag=' || name || '/>' || name;
  INSERT INTO MODULE.TEMP values (tempName, description);
END;
SELECT * FROM MODULE.TEMP;
END;
Step 2: Create a SQLplus report that queries the procedure record. Add a SQLplus Script Section to a report and add the query:
select * from GetReportFormat('Host Name','ADSA Data Source','ATC%')
Set Returns to HTML.
Step 3: Generate the report. The link shows up when you hover over one of the tags.
Keywords: SQLPlus report Links
References: None
Problem Statement: As part of the APC Aspen Watch feature for Centralized Benefits Monitoring, the following license key can be seen on the License Profiler: License Name: SLM_RN_APC_MISC_EM Product Name: Aspen Watch Centralized Monitoring / Aspen Watch Data Manager * Token Consumption: 1 token per 75 tags *NOTE: starting in V14, Aspen Watch Centralized Monitoring was renamed to Aspen Watch Data Manager, and this product name change does not impact any functionality or licensing. The user might see a “Licenses in Use” and “Tokens in Use” count for this license key, even when Centralized Benefits Monitoring is disabled on Aspen Watch Maker or unchecked for Collection on the PCWS History Tab. What can cause this behavior?
Solution: The Centralized Monitoring feature also takes into account the use of replicating miscellaneous tags, which can account for the license and token usage. Starting in V10, Centralized Licensing was added. Starting in V11, the ability to replicate Miscellaneous Definition Records (AW_MSCDef) was introduced as a new feature and it was included under Centralized Monitoring licensing scheme. Starting in V14, the Tag Data Transfer feature adds the ability to transfer selected Aspen Watch data to another record for manipulation or replication. These transfers also count towards the total, unless the destination tag is a replicated Miscellaneous Tag. Licenses in Use Count This is the logic used to calculate the “Licenses in Use” count when the Replication switch for the overall InfoPlus.21 database is enabled: Centralized License Count = {Count of Miscellaneous tags in High ID tag groups (ID > 99) AND not replicated} + {Count of Replicated Miscellaneous Tags} For V14 and later: above + {Count of Tag Transfers that are not replicated Miscellaneous Tags } This is the logic used to calculate the “Licenses in Use” count when Replication is Disabled: Centralized License Count = {Count Miscellaneous Tags in High ID Tag Groups} For V14 and later: above + {Count of Tag Transfers that are not replicated Miscellaneous Tags } Tokens in Use Count When Replication is enabled, Replicated Miscellaneous tags consume 1 token per 75 tags. Also, the base Centralized Monitoring license consumes 1 token just for enabling the option in the Aspen Watch Maker preferences dialog and that includes up to the first 75 replicated Miscellaneous tags. Enable/Disable Centralized Monitoring in Watch Maker > Tools > Set Preferences dialog window: Note: The APC Benefits Monitoring is associated to the Centralized Monitoring, but there is no additional token charge from the APC side for this feature, just from aspenONE Process Explorer. 
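As a rough sketch of the counting rules above (for the replication-enabled case), the function names and inputs below are illustrative only, not an actual AspenTech API; in practice the tag counts would come from the InfoPlus.21 database:

```python
import math

def licenses_in_use(high_id_misc_not_replicated, replicated_misc,
                    tag_transfers_non_misc=0):
    """'Licenses in Use' count with Replication enabled. V14+ adds the
    tag-transfer term; pass 0 for earlier versions."""
    return high_id_misc_not_replicated + replicated_misc + tag_transfers_non_misc

def tokens_for_replicated_misc(replicated_misc):
    """1 token per 75 replicated Miscellaneous tags; the base Centralized
    Monitoring license consumes 1 token just for enabling the option,
    which covers the first 75 replicated tags."""
    return max(1, math.ceil(replicated_misc / 75))

# e.g. 40 non-replicated high-ID Misc tags, 80 replicated Misc tags,
# and 5 tag transfers (V14+); all counts are hypothetical
print(licenses_in_use(40, 80, 5))        # 125
print(tokens_for_replicated_misc(80))    # 2
```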
How to Control the License and Token Usage for Centralized Monitoring In order to control the license and token usage, you can follow these steps to disable replication for Miscellaneous tags: Open InfoPlus.21 Administrator and navigate to Definition Records > AW_MSCDef Right-click on the definition record and select Properties: In the Properties dialog, go to the Replication tab. Clear the checkbox here for “Enable replication when an AW_MSCDef record is created” and hit Apply and OK. If replication of some of the Miscellaneous tags is desired, you can either: Disable Replication for the overall AW_MSCDef record (as shown above) and then enable it for those specific tags that need it OR Enable Replication for the overall AW_MSCDef record and then disable it for the individual tag records that don’t need it To enable/disable replication for individual miscellaneous tag records, right-click on their name and select Properties. Navigate to the Replication tab and toggle the switch for “Enable tag replication”. Keywords: centralized, benefits, monitoring, miscellaneous, license, token, consumption, usage, AW_MSCDef, misc, tags References: None
Problem Statement: Why is the pressure result different between the Pipe Segment and the Hydraulics sub-flowsheet in Aspen HYSYS?
Solution: The above pressure difference comes from the friction factor calculation. You will need to change the Friction Factor method to Churchill for the Hydraulics sub-flowsheet. Once this is done, the pressure result will be the same for both. Please note the following:
1. The Pipe Segment calculates the pressure drop with the Darcy-Weisbach equation, using the Churchill method for the friction factor when the flow is single phase.
2. In the Pipe Segment it is not possible to change the friction factor calculation method; that is only possible in the Hydraulics sub-flowsheet.
Aspen Hydraulics pipeline and hydraulic network simulations can be solved in Steady State mode or Dynamic mode on a single network, with the ability to switch between the two modes and also switch between solvers.
Keywords: Aspen HYSYS, Pipe Segment, Hydraulic
References: None
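For reference, the relations mentioned in this article can be written out. These are the standard open-literature forms (Darcy-Weisbach pressure drop and the Churchill friction-factor correlation), not an extract of the HYSYS internals:

```latex
% Darcy-Weisbach pressure drop for a pipe of length L and diameter D
\Delta P = f_D \,\frac{L}{D}\,\frac{\rho v^2}{2}

% Churchill correlation for the Darcy friction factor (valid for all
% flow regimes and relative roughness \varepsilon/D)
f_D = 8\left[\left(\frac{8}{Re}\right)^{12} + \frac{1}{(A+B)^{3/2}}\right]^{1/12}

A = \left[2.457\,\ln\!\left(\frac{1}{(7/Re)^{0.9} + 0.27\,\varepsilon/D}\right)\right]^{16},
\qquad
B = \left(\frac{37530}{Re}\right)^{16}
```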
Problem Statement: How do I get rid of the error message 'Compile error in hidden module: modRegFunctions' in Aspen Simulation Workbook?
Solution: In order to get rid of this error message, you will need to use the regsvr32.exe command, since the add-in loader apparently could not be located automatically. The command needs to point to C:\Program Files (x86)\AspenTech\Aspen Simulation Workbook V12.1\ASWXLAddinLoader.dll.
Click on the Start menu, type CMD, right-click on it, and select Run as administrator. Then type:
regsvr32 "C:\Program Files (x86)\AspenTech\Aspen Simulation Workbook V12.1\ASWXLAddinLoader.dll"
Double-check the directory and make sure that it is pointing to the right location.
Keywords: Aspen Simulation Workbook, Error, modRegFunctions
References: None
Problem Statement: Starting in V14, the SLM License Terminate Session tool allows you to terminate licenses for specific users and machines for specific products. Terminating a session for a particular user and machine terminates all products and returns the associated licenses.
Solution: To terminate a session, launch SLMLicenseTerminateSession, located under C:\Program Files (x86)\Common Files\AspenTech Shared\SLM Administration Tools. Select the user(s) and click Terminate. The following prerequisites must be met for the SLM License Terminate Session tool to function properly: You must run V9.6 or higher Sentinel software, and V14 or higher SLM Client and Server components must be installed. Note: MSC (Aspen Manufacturing and Supply Chain) licenses may be automatically re-acquired even after you terminate the sessions. Kindly refer to the attached Excel spreadsheet for the list of products that can be terminated.
Keywords: None
References: None
Problem Statement: This knowledge base article provides you with two different methods to populate tags in Aspen Cim-IO transfer records:
1. Using an Aspen SQLplus query.
2. Starting from V7.3, using a new Excel add-in called Aspen Configuration, which can be used to manage the addition/deletion of tags from an Aspen Cim-IO transfer record.
Solution: 1. Using an Aspen SQLplus Query
Create a text file using Notepad and save it as *.txt (e.g. C:\input.txt), as shown below. Note the use of the comma as a column delimiter. For this exercise, make sure you save your file as input.txt in the root directory of the C: drive on your local machine.
interfacetag1,Valve3 IP_INPUT_VALUE
interfacetag2,Valve4 IP_INPUT_VALUE
interfacetag3,Salttanklvl IP_INPUT_VALUE
interfacetag4,Saltflow IP_INPUT_VALUE
interfacetag5,Recircflow IP_INPUT_VALUE
interfacetag6,Productflow IP_INPUT_VALUE
Using the SQLplus Query Writer, create and execute an SQLplus script to read the contents of the text file you have just created:
iosimul_get1.io_record_processing = 'OFF'; -- the assumed name of the transfer record is iosimul_get1
SET EXPAND_REPEAT = 1; -- the repeat area of the transfer record will automatically be increased
INSERT INTO iosimul_get1 (io_tagname, io_value_record&&fld, io_data_processing)
-- the above fields will be inserted into the repeat area of the record;
-- note the double ampersand, see Solution # 66550
SELECT
-- the above fields will be updated with the following selection:
TRIM(SUBSTRING(1 OF LINE BETWEEN ',')), -- the io_tagname needs to be the first column from the text file
TRIM(SUBSTRING(2 OF LINE BETWEEN ',')), -- the IP.21 tagname AND the field need to be the second column
'ON' -- by this 'ON' command, the data processing is switched 'ON'
FROM 'C:\input.txt'
WHERE TRIM(LINE)>''; -- the text file is C:\input.txt
iosimul_get1.io_record_processing = 'ON'; -- afterwards the record processing is switched ON again
iosimul_get1."io_activate?" = 'YES'; -- finally, the transfer record is activated. Note the double quotes.
2. Aspen Configuration Add-in (V7.3 and Higher)
For the detailed procedure on how to enable the configuration and process data add-ins in V7.3 and higher, see 'How do I add ribbon-based Excel Add-Ins to spreadsheets?' Note: these ribbon-based add-ins are available only with Excel 2007 and higher.
After the configuration add-in is enabled, it will be displayed in the Microsoft Excel ribbon. The steps 1 through 6 that need to be followed for adding the tags to the Cim-IO transfer record are illustrated in the screenshot below. After clicking the Execute button, a new window will open showing the progress of the tag addition. You can choose to view the updates. Once it completes, check the IOLLTagGetDef record to verify that the tags were inserted.
Keywords: SQLplus, SQL+, query, populate, mass, configure, mass-populate, GET record, Transfer record, Configuration add-in
References: None
Problem Statement: I am getting Error 0x80070057 initiating connection to “servername” using iis3atpd.dll. Also, in Configure Monitor Statuses the RTDB status is not showing “OK”.
Solution: This error in the AtOMS logs indicates that AtOMS connectivity with the IP.21 server has not been established.
1. Go to Services and check the “Log On As” account for the Aspen OMS Movement Monitor and Aspen OMS Tank Monitor services.
2. Log in to your AtOMS server using this account.
3. Run the ADSA Client Configuration tool as administrator.
4. Make sure that your IP.21 server is added as a directory server and the correct protocol is selected in the ADSA tab.
5. Go to the “Configuration” tab and make sure that your IP.21 data source is added.
6. Try trending the IP.21 tags using Aspen Process Explorer or aspenONE Process Explorer and confirm that you are getting the data.
7. Run Configure Tank Monitor (AtOMS) as administrator and enter the RTDB server name as your IP.21 server name.
8. Run Configure Monitor Statuses as administrator. You will see the below window.
9. Make sure that for both the Movement Monitor and Tank Monitor, the RTDB status is “OK”.
Keywords: Error 0x80070057, AtOMS RTDB error, AtOMS IP.21 connectivity
References: None
Problem Statement: What is the basic information required for dryer modelling in Aspen Plus?
Solution: Solids modelling is a feature available in Aspen Plus. Many customers and prospective customers ask whether a rotary dryer, or other types of drying application, can be modelled in Aspen Plus. This article describes the basic inputs required for modelling a dryer. In Aspen Plus, under the solids handling capabilities, there is the option to model different types of dryer such as spray dryers, fluidized bed dryers, rotary dryers, contact dryers, etc. Before attempting to model a dryer in Aspen Plus, the following minimum inputs should be requested from the customer.

For the modelling of a simple batch dryer in Aspen Plus, the following inputs are required:
Feed Details: Temperature, pressure, flow and composition
Dryer Details:
Type of operation: Continuous/Batch
Gas flow direction: Cross-flow/Co-current/Counter-current
Solid holdup or solid residence time
Cross sectional area of the dryer (this may be required when simulating some dryers)
Solid moisture content basis: Wet/Dry
Drying curves
Critical and equilibrium solids moisture content
Feed & product compositions & PSD data
Keywords: Rotary dryer, Solid handling
References: The following article mentions several examples for dryers which are available in the Aspen Plus library: https://esupport.aspentech.com/S_Article?id=000061295
Problem Statement: This knowledge base article explains the steps needed to configure the Auto-Upload Tool (AUT) (V12 and higher) to automatically submit software usage logs to AspenTech. Note: for V8.7 AUT configuration instructions, please refer to KB 130332.
Solution: If you do not have the AUT installed, you may install it from the aspenONE Software License Manager. See in-depth installation instructions here: https://esupport.aspentech.com/S_Article?id=000100391 Quick installation steps are below: At the main Aspen Installer screen, click the Install and configure SLM software button. At the products list, check the Auto Upload Tool. NOTE: Check the HTTP Server option if your server will be used as a “Collection Server” for other license servers. Once the AUT is installed, launch aspenONE SLM License Manager, and then click the Auto Upload Tool button. Selecting Settings After launching the Auto Upload Configuration Tool: In the Admin Email Address field, type the email address for the SLM Server administrator. AUT uses this email address to notify you in case of a file transfer failure. An email notification is also automatically sent to the administrator when a log file is received at AspenTech In the Company Email Domain field, type your company’s email domain, for example, aspentech.com. This should be a valid company domain. Select one of the following options for Settings: Maintain current settings: Select this to keep your current settings. Use default configuration settings: The default configuration settings send usage logs every Friday at 6:00 PM local time using HTTPS, with medium privacy settings. Use custom settings: If you select this option, click Continue to specify your custom settings on the following screens. Configuring Custom Settings: Step 1 On the Custom Configuration (Step 1 of 2) screen, specify the following information: Frequency of log file transmission Privacy options Note: The SLM System Name section lists the system name of the SLM Server. Auto Upload Tool auto-fills this by reading the information from the license file on the computer. 
Frequency of Log File Transmission In the Upload Frequency field, from the first drop-down list, select whether you want files to be transmitted weekly or monthly, and specify the day of the week (if weekly). From the second drop-down list, select the upload time. Note: The Every Day interval is included for testing and/or troubleshooting. Privacy Options The Auto Upload Tool offers the following privacy options. By default, the Medium option is used. Low: User information is not changed. Medium: (Recommended) User information is converted into unidentifiable unique names. For example: User name John is converted to user1. Machine name TestPC1 is converted to machine1. IP Address 123.123.123.123 is converted to ip1.ip2.ip3.ip4. High: User information is converted into masked names to meet privacy regulations. This option requires you to use the same mapping file if you have two or more SLM Servers. This option is the same as the Scramble option from prior releases. For example: User name John is converted to x3y4ss. Machine name TestPC1 is converted to g74sr2. IP Address 123.123.123.123 is converted to es4.rts.p3t.au2. Configuring Custom Settings: Step 2 Selecting Usage Log File Upload Options On the Custom Configuration (Step 2 of 2) screen, select the method by which you want to upload usage log files: HTTPS – Supports transmission of usage log files directly from a SLM server or Collection Server to AspenTech server via secure HTTP. AUT uses HTTPS by default, but you can change it to another transfer method as appropriate. Refer to the “Transmitting Usage Log Files via HTTPS” section for further details. SFTP – Supports transmission of usage log files directly from a SLM server or Collection Server to AspenTech server via secure file transfer protocol utilizing AES 128-bit encryption. Refer to the “Transmitting Usage Log Files via SFTP” section for further details. 
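The Medium privacy option described above can be illustrated with a small Python sketch. The actual Auto Upload Tool implementation is not public; the hypothetical mapping below only demonstrates the idea of converting user information into stable, unidentifiable numbered aliases (John becomes user1, TestPC1 becomes machine1, and repeated names keep the same alias).

```python
# Hypothetical sketch of the "Medium" privacy mapping: each distinct
# user/machine name gets a stable numbered alias in order of first use.
def make_aliaser(prefix):
    mapping = {}
    def alias(name):
        if name not in mapping:
            mapping[name] = f"{prefix}{len(mapping) + 1}"
        return mapping[name]
    return alias

user_alias = make_aliaser("user")
machine_alias = make_aliaser("machine")
print(user_alias("John"))        # user1
print(user_alias("Mary"))        # user2
print(user_alias("John"))        # user1 (same name, same alias)
print(machine_alias("TestPC1"))  # machine1
```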
Email – Auto-generates an email message and attaches the zipped usage log file as an email attachment and auto-transmits from a SLM server or Collection Server to AspenTech’s ALC mailbox. Refer to the “Transmitting Usage Log Files as Email Attachments” section for further details. To Collection Server – Allows you to collect usage log files by transmitting them to a shared collection server from multiple SLM servers within your organization. The collection server will then be configured to transmit usage logs from these servers to AspenTech. Refer to the Auto Upload Tool: Collection Server Configuration Settings KB Article for further details. Transmitting Usage Log Files via HTTPS On the Custom Configuration (Step 2 of 2) screen, from the Upload Method drop-down list, select HTTPS (Recommended) to configure the Auto Upload Tool to transmit your files via secure http. If you want to use this machine as a collection server, follow the steps in the “Using a Machine as a Collection Server” KB Article. If you do not want to use the machine as a collection server, clear the Use this machine as a collection server check box. Click Continue. Follow the steps in the “Testing Upload” section to validate connectivity to the AspenTech https server. If you are finished with all other configuration settings, click Finish. Transmitting Usage Log Files via SFTP On the Custom Configuration (Step 2 of 2) screen, from the Upload Method drop-down list, select SFTP to configure the Auto Upload Tool to transmit your files via secure file transfer protocol. Note: SFTP transmission requires your Port 22 to be open to the AspenTech server alcsftp1.aspentech.com. The Auto Upload Tool requires a paired private/public encryption keys ensuring data security of the usage log files. These encryption keys can be obtained from AspenTech [email protected]. Allow two business days for receipt of the encryption keys file. This file has an .acc extension. 
Once you have received the encryption keys, create a directory named Keys under <Program Files>\AspenTech\ALC\ and copy the .acc file to it. Next to the Import User Account Path field, click Browse to browse to the location of the encryption keys obtained from AspenTech. Make sure that port 22 is open and AspenTech SFTP server is accessible. Run the following command from a DOS window to verify this: telnet alcsftp1.aspentech.com 22 If the port is accessible, you will see the following prompt in DOS window: SSH-2.0-OpenSSH_3.8.1p1 If the above prompt does not appear, you may need to get your port 22 open to AspenTech server. Please work with your IS team to have port 22 open to alcsftp1.aspentech.com. This is an outbound connection. If you want to use this machine as a collection server, follow the steps in the “Using a Machine as a Collection Server” KB. Click Continue. Follow the steps in the “Testing Upload” section to validate connectivity to the server. If you are finished with all other configuration settings, click Finish. Transmitting Usage Log Files as Email Attachments On the Custom Configuration (Step 2 of 2) screen, from the Upload Method drop-down list, select Email to transmit log files as email attachments. This method is a good alternative when SLM servers are behind a firewall with no access to Internet. Next to SMTP Server Name, specify your email server and port. In the Email field, type the SLM server admin’s email ID (for example, [email protected]). In the User Name field, type the User Name of the SLM Server Administrator. Make sure to type valid information here. Click the Change button to specify the password. In the Password field, type the password. Note: The AspenTech email account information is preconfigured. It should point to [email protected]. If you want to use this machine as a collection server, follow the steps in the “Using a Machine as a Collection Server” KB: . Click Continue. Follow the steps in the “Testing Upload” section. 
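The port-22 reachability check described above (telnet alcsftp1.aspentech.com 22) can also be scripted when telnet is not installed. This is a generic Python sketch of an outbound TCP connectivity test, not part of the Auto Upload Tool; the host name is taken from the article.

```python
import socket

# Generic TCP reachability check, equivalent in spirit to the
# "telnet <host> <port>" test described above.
def port_is_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (an outbound connection, like the telnet check):
# if port_is_open("alcsftp1.aspentech.com", 22):
#     print("Port 22 is open to the AspenTech SFTP server")
```

If this returns False, work with your IS team to open outbound port 22, as described above.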
Click Finish. Testing Upload On the final screen of the AUT Configuration Tool, click Test Upload. If your Auto Upload Tool configuration settings are correct, a message will appear to indicate that you have successfully configured your usage log uploads. Click Finish to close the window. Related Videos: Video: How to configure the Auto Upload Tool (AUT) Keywords: Sentinel License Manager Auto Upload Tool Aspen License Center AUT SLM ALC Usage Logs References: None
Problem Statement: This Knowledge Base article explains how to resolve the issue of SLM Manager not showing the correct token counts.
Solution: In certain cases, the total number of tokens may not be updated in SLM Manager after purchasing additional tokens and installing the new license. For instance, if the initial license token count is 10 tokens and an additional 5 tokens were purchased, the License Manager may still show only 10 tokens. To address this issue, please follow these steps: Ensure that you have applied the latest license file received from the distribution team. Make sure that only the latest license file is present in the License source directory. You can find this directory in one of the following locations: C:\Program Files\Common Files\AspenTech Shared C:\Program Files (x86)\Common Files\AspenTech Shared C:\Program Files (x86)\Common Files\SafeNet Sentinel\Sentinel RMs License Manager\WinNT Run loadls as an administrator. Press the Remove button, and then run it again as an administrator and press the Add button. Note: Administrative privileges are required to perform the aforementioned steps. Keywords: SLM, Token, License, Update References: None
Problem Statement: For equilibrium reactions in Chemistry that involve molecular species, does the reaction only happen in the liquid phase or does it also occur in the vapor phase?
Solution: The equilibrium chemistry reaction will occur in both the vapor and liquid phase. If you have vapor-liquid phases specified for streams and blocks, the chemical potentials of the molecular species are equal, and the reaction equilibrium is implicit in the vapor phase. Ionic reactions will only occur in the liquid phase since ions are only in the liquid phase. In addition, if vapor only phase is specified for blocks or streams, the equilibrium reaction will occur in the vapor phase since we still solve for chemical equilibrium using the vapor fugacities, or partial pressures if a K-STOIC is specified on that basis. Keywords: None References: None
Problem Statement: What can be done when an error like the following is encountered?

Loading Records from File F:\Program Files\InfoPlus.21\db21\etc\Replication.rld - TSK_SUBR - IP_HOST_NAME CANNOT WRITE VALUE TO RECORD: ** No such record** LOAD ABORTED AT LINE 10 OF FILE F:\Program Files\InfoPlus.21\db21\etc\Replication.rld : = 749D
Solution: If this error is encountered while performing a recload as part of the process of forming an Aspen Enterprise IP.21 Historian (formerly the Collaborative), then make sure that the Collaborative has not already been enabled. To tell if the Collaborative has been enabled, look for this registry entry:
HKLM\SOFTWARE\AspenTech\InfoPlus.21\<version, e.g. 17.0 or 18.0>\group200, DWORD value COOPERATIVE_MEMBER
If the value is set to 1 then it has been enabled. Change the value to 0 (zero). It does not matter if it is Decimal or Hexadecimal. Then stop and start InfoPlus.21. After the restart the ability to recload will be made available again.
Keywords: recload, Loading Records from File, CANNOT WRITE VALUE TO RECORD, No such record, LOAD ABORTED AT LINE, COOPERATIVE_MEMBER
References: None
Problem Statement: When trying to enter the maximum size that a fileset can grow using the Aspen InfoPlus.21 Administrator repository properties one might get this error: Data out of Range. What does this error mean?
Solution: By default, in all versions of Aspen InfoPlus.21, the maximum size that a file set can grow to is set to 1 GB (1024 MB, or 1,073,741,824 bytes). If a user tries to enter any number larger than this, it results in the error Data out of range. For all versions prior to 2006, the maximum file set size is restricted to 1 GB, so to resolve this error a user has to enter a number equivalent to 1 GB or smaller in the repository properties field. From version 2006.5 and above it is possible to change the maximum size of the file set. There is a default maximum setting that must be changed to allow a file set to be bigger than 1 GB. To adjust the default file set size:
1. Open Windows Explorer, and then go to the code directory under db21 (drive:\Program Files\AspenTech\InfoPlus.21\db21\code).
2. Run the chghistparams.exe utility.
3. Adjust the File Set Size (MB) number under the Max column to be larger than 1024.
4. Click OK.
The user should now be able to set a file set size greater than 1 GB, up to the size specified in the maximum file set size field. Note: There is no need to restart the database for the file set size change to be recognized; however, the change will only take effect after the current file set is closed and a file set shift occurs.
Keywords: Data out of range, File set size
References: None
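The size check behind the error in the article above can be sketched in a few lines of Python. This is purely illustrative (not Aspen's actual validation code); it just shows the arithmetic of the 1 GB ceiling and how the error goes away once the maximum is raised.

```python
# Illustrative sketch of the "Data out of range" check. The default
# ceiling is 1 GB: 1024 MB = 1024 * 1024 * 1024 = 1,073,741,824 bytes.
DEFAULT_MAX_MB = 1024

def validate_fileset_size(size_mb, max_mb=DEFAULT_MAX_MB):
    if size_mb > max_mb:
        raise ValueError("Data out of range")
    return size_mb

print(1024 * 1024 * 1024)                 # 1073741824 bytes in 1 GB
validate_fileset_size(1024)               # accepted with the default maximum
validate_fileset_size(2048, max_mb=4096)  # accepted once the Max value is raised
```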
Problem Statement: This Knowledge Base article illustrates how to resolve the IQ Analyzer status “Analyzer validation error, can’t transform PAZURAZ” in PCWS.
Solution: As the message suggests, the raw Analyzer reading is not passing the validation check. The first step in troubleshooting is to verify that the IQ Online Server is able to read the raw Analyzer reading correctly from the IO source. You may perform a Cim-IO Test API for the tag that the Raw Analyzer Value is mapped to, to verify that the communication is successful. If Test API fails, further investigation of the Cim-IO communication is required. If the Test API is successful and the Analyzer value can be read properly, next check why the value fails validation. Click on the Engineering view | Analyzer Module | Validation menu and check whether any of the current validation settings are being violated. In one case, it was found that the Freeze Deadband was set incorrectly to a very small value (like 0.01), so every Analyzer reading coming in was failing this validation check, setting the value status to Bad. This also caused the Analyzer status to get stuck on Initializing. The resolution is either to set the Freeze Deadband to 0 (to disable it) or to set it to a larger number so that the reading passes validation. In another scenario the following steps were performed to resolve the issue:
1. Save the IQ from Manage.
2. Stop and Delete it (using the remove history option) from Manage.
3. Recreate and reload the IQ and start it from Manage.
That should resolve the issue and return the IQ to normal operation.
Keywords: IQ, Analyzer, Status, PAZURA, Validation, PCWS, Error, stuck, initializing
References: None
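The freeze-deadband behavior described in the article above can be sketched as follows. This is not IQ's actual code; it is a hypothetical illustration of why a very small deadband flags nearly every reading as frozen, and why setting it to 0 disables the check.

```python
# Illustrative freeze check: a reading is flagged frozen (and its status
# set to Bad) when it moves by less than the Freeze Deadband between scans.
def is_frozen(previous, current, freeze_deadband):
    if freeze_deadband == 0:  # 0 disables the freeze check
        return False
    return abs(current - previous) < freeze_deadband

print(is_frozen(50.00, 50.005, 0.01))  # True  -> reading would be marked Bad
print(is_frozen(50.00, 50.005, 0.0))   # False -> check disabled
```

With a deadband of 0.01, even a normally varying analyzer signal fails the check on most scans, matching the stuck-on-Initializing symptom described above.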
Problem Statement: How does the workflow from OptiPlant to ACCE work?
Solution: The improved interface makes the ACCE and Aspen OptiPlant 3D export and import process easier and more intuitive. Users can pre-configure and select which piping attributes are transferred to the estimate. The following steps explain how to export Aspen OptiPlant models to Aspen Capital Cost Estimator:
1. Launch Aspen OptiPlant 3D Layout V14 and open the layout file to be exported.
2. Make a copy of the ACCE Data Excel file generated from ACCE and save the copy into the OptiPlant file folder.
3. Go to the Deliverables tab and click the ACCE button to open the ACCE Interface window.
4. In the ACCE Interface window, go to the Export Pipe tab and click the “…” button to select the ACCE spreadsheet.
5. To select the MTO file, click the “…” button and choose the corresponding piping txt file.
6. Toggle the Equipment Wise Output Type option, and click Export.
7. Once the ACCE file successfully generates, you may minimize OptiPlant. For this example, the new spreadsheet will be referred to as OptiPlant Data.
8. Open the project in ACCE V14 and evaluate the project.
9. Click the OptiPlant button at the top of the ACCE window.
10. In the OptiPlant Interface window, on the Import page, click Browse. Select the file OptiPlant Data.xlsx and click Open.
11. Click Import, and then OK once the spreadsheet has been imported successfully. Then click Exit to go back into the ACCE software.
12. Open any component and, in the Options dropdown menu, select Pipe - Item Details (*). You can see that data from OptiPlant is now incorporated into ACCE.
13. In the Pipe Item Dets sheet of the generated report, you may filter by Component/Source and review which lines have been updated with new lengths and specifications.
Keywords: ACCE, MTO, Export, Equipment, Configuration
References: https://esupport.aspentech.com/S_Article?id=000099262 https://esupport.aspentech.com/S_Article?id=000099256
Problem Statement: If you try to create a variable whose name starts with “INF” or “inf” on a custom block, it will not allow you to proceed; it will show “Invalid Expression” and the equation box will be highlighted in red with an exclamation mark right next to it.
Solution: Here is a screenshot of the previously described problem: The reason for this is that the algebraic library used within Unified GDOT Builder takes “INF” or “inf” as a literal number (an abbreviation for infinity), so it expects an operator right after it, since a variable name cannot start with a number. Similarly, if you were to name a variable 2A it would not be accepted; if you want to multiply the parameters you need to write it as 2*A. To avoid this issue and work with the code syntax, please avoid variable names that start with “Inf”.
Keywords: GDOT, Unified, Unified GDOT Builder, custom models, inf, equation
References: None
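The naming rule described in the article above can be sketched as a simple validator. This is a hypothetical illustration (GDOT Builder's actual parser is not public): names that begin with "inf" in any case, or with a digit, are rejected.

```python
import re

# Hypothetical validator mirroring the described behavior: a leading
# "inf" parses as the literal infinity, so reject such names, just as
# names starting with a digit (e.g. 2A) are rejected.
def is_valid_name(name):
    if name.lower().startswith("inf"):
        return False  # leading "inf"/"INF" reads as the infinity literal
    return re.match(r"^[A-Za-z_]\w*$", name) is not None

print(is_valid_name("InfFlow"))  # False - starts with "Inf"
print(is_valid_name("FeedInf"))  # True  - "inf" not at the start
print(is_valid_name("2A"))       # False - starts with a digit
```

Note that "inf" appearing later in a name is fine; only the leading position is ambiguous to the expression library.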
Problem Statement: This video outlines how to add a new IP.21 repository.
Solution: The following are general guidelines for adding a new IP.21 repository.
Keywords: IP.21, Repository, File sets
References: None
Problem Statement: This article describes how to generate a simple report that shows the Name, Value, and Timestamp of a tag using SQLplus Reporting.
Solution:
1. Open the following link in the web browser of your preference: http://<server_name>/sqlplus. NOTE: replace <server_name> with the name of your server.
2. Click New Report and choose the Section Type. Write a Section Name and click the Add button.
3. Type the tag you'd like to get information from in the Tag Name section.
4. Select the attribute of your choice and write a Label. Click the Add button. If you'd like to, select a couple more attributes and add them to the list.
5. Once you finish adding attributes, click OK.
6. Run the report using the Run button and save it under the folder of your preference (Public or Private).
You should now be able to view your report under the folder you chose.
Keywords: SQLplus, Reporting, Report
References: None
Problem Statement: What can cause spikes in the GDOT optimizer solution?
Solution: The most likely causes are noisy measurements, improper model configuration, or improper tuning configuration. In a GDOT model for middle distillate applications, it is important to check the range specified for the side products of the crude units. The following picture shows a list of evaporation range temperatures for the diesel stream in the crude unit. The EVAP range refers to the % amount of liquid which is evaporated at this boiling range temperature. If the range is set so tight that it does not account for the evaporation range of all side products correctly, then it can cause an error in the optimizer solution. The symptom of this problem can be observed in the T05.bal controlled variables spiking in DR, while the T05 values for the side products in DR compensate for this error. The remedy is to navigate to Aspen Unified and fix the EVAP temperature range (widen the range) to accommodate the boiling range temperatures of all side draws, and re-launch the application with the new model.
Keywords: spike, unified, gdot, bal, cv, dr, temp, evap
References: None
Problem Statement: What is the maximum number of coefficients that can be used in DMC3?
Solution: The number of coefficients is a user-entered value that determines the sampling of the model curve. It can be interpreted as the number of continuous segments used to model the curve you want to identify, so changing this parameter affects the resolution of the curve. Typically, fast dynamics require more coefficients, since you need to sample shorter periods of time, while slow dynamics can use fewer coefficients. In DMC, if TTSS (in minutes) equals the number of coefficients, one coefficient is used per minute; with TTSS = 30 min and Coefficients = 60, you would have 2 coefficients per minute. In DMC there is no hard limit on the number of coefficients. However, keep in mind that the execution time and memory required may increase substantially for very large controllers as more coefficients are used. The relation between the number of coefficients and the controller execution interval can be calculated as:
NoC = TTSS / CEI
NoC = Number of Coefficients
TTSS = Time to Steady State
CEI = Controller Execution Interval
Keywords: DMC3, Number of Coefficients
References: None
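The relation in the article above (NoC = TTSS / CEI) can be computed directly; the worked example below reproduces the TTSS = 30 min case from the text.

```python
# NoC = TTSS / CEI, the relation given above (all times in minutes).
def number_of_coefficients(ttss_minutes, cei_minutes):
    return ttss_minutes / cei_minutes

print(number_of_coefficients(30, 1))    # 30.0 -> 1 coefficient per minute
print(number_of_coefficients(30, 0.5))  # 60.0 -> 2 coefficients per minute
```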
Problem Statement: This article describes the steps to follow to change A1PE's language when using either Google Chrome or Internet Explorer.
Solution:
Google Chrome
1. Click the three dots in the upper right corner of the page.
2. Select Settings from the drop-down menu and go to Languages.
3. Click the Add Languages button, look for the language you would like to add, select it and click the Add button at the bottom.
4. Click the three dots next to the language you added and select the Display Google Chrome in this language option.
5. Close all tabs and re-open Google Chrome. You should now see aspenONE Process Explorer in the language of your preference.
Internet Explorer
1. Click the gear icon in the upper right corner of the page.
2. Select Internet Options from the drop-down menu and click the Languages button.
3. Click the Set Language Preferences button, click the Add a Language button and look for the language you would like to add.
4. Once you add it, go to Advanced Settings and select the language you added from the drop-down menu under the Override for default input method section. Click Save and close this window.
5. Click OK in the Language Preference window and OK in the Internet Options window.
6. Close all tabs and re-open Internet Explorer. You should now see aspenONE Process Explorer in the language of your preference.
Keywords: Language, AspenONE Process Explorer, A1PE, Google Chrome, Internet Explorer
References: None
Problem Statement: After upgrading from Fidelis V12 to V14, you may encounter the error Unable to show Visual Studio Tools for Applications (VSTA) IDE. Please try repairing the Visual Studio Tools for Applications program (usually necessary if Visual Studio was installed after Fidelis). This error occurs whenever Write Key Routines is executed and tries to open VSTA, which prevents the user from writing code.
Solution: This error can occur when multiple versions of VSTA and Aspen Fidelis are installed on the same machine at once and Fidelis V12 was not uninstalled before the upgrade. To resolve this error, uninstall all VSTA and Fidelis versions, then reinstall:
1. Search for and open Control Panel > Programs > Uninstall a Program.
2. Find Microsoft Visual Studio Tools for Applications [Year] > Right-click > Change.
3. Another dialog box will pop up; click Uninstall and follow the prompts to uninstall.
4. Repeat for every VSTA program year that is available on the system.
5. Uninstall all Fidelis versions.
6. Go to https://visualstudio.microsoft.com/downloads/, then download and install the appropriate VSTA version based on Fidelis compatibility (VSTA 2019 and below for Fidelis V14).
7. Reinstall Aspen Fidelis V14.
If the error still occurs, find the newly installed VSTA version in Control Panel > Right-click > Change > Repair. Follow the prompts and the repair process should install the necessary components to launch VSTA from Fidelis properly.
Keywords: Fidelis write key routines error
References: None
Problem Statement: The Aspen Mtell Alert Manager services are running but connection to them cannot be established. You may encounter the following message while running the MAM configuration app: “Error while trying to connect to the IIS services”. Cause: During the installation, the local system account (SYSTEM) was provided on the Specify Windows services account information screen.
Solution: Obtain a Managed Service account (preferable) or a Service account to run the services. Follow these steps to reconfigure the services to use this account.
1. Stop all Aspen Mtell Alert Manager services (referenced in this KB article: https://esupport.aspentech.com/S_Article?id=000100666).
2. Launch Internet Information Services (IIS).
3. Expand the menu under the machine name.
4. Select Application Pools.
5. Right-click APMDataProviderPool, then click Advanced Settings.
6. Click Identity, then click the three dots that appear on the right side of the dialog box.
7. Select Custom account and click Set…
8. Enter a Managed Service account or a Service account with the correct password.
9. Click OK, OK, then OK again.
10. Repeat steps 5-9 for the following application pools: DASvcAppPool, FastSvcAppPool, FastUpdaterSvcAppPool, GatewaySvcAppPool, ValidationSvcAppPool, ResultSvcAppPool.
11. Reset IIS.
Keywords: Mtell Alert Manager, MAM, error
References: None
Problem Statement: An issue in past versions of Aspen Mtell could have resulted in some objects appearing only in Agent Builder or only in System Manager. A utility is provided separately from these applications to fix these issues.
Solution: On the Mtell server, follow these steps: Backup the database prior to running the utility. The utility resolves the issues by hard deleting the objects which only exist for one product, allowing you to recreate them properly in the current version. Hard deleting means the records are removed from the database, rather than simply marked deleted. Go to the directory C:\Program Files\AspenTech\Aspen Mtell\Suite\Tools\System Manager\Hierarchy Utility Double click AspenMtellHierarchyUtility.exe. Click Check Difference. This will show the objects which only exist for System Manager, and those which only exist for Agent Builder. Take note of these so you can recreate the objects which should exist. The application may display these messages: You are about to hard delete error objects. are you sure you want to continue? After you confirm, the utility will hard delete the objects. Hard Delete successfully. The hard deletions completed successfully. Error happen when deleting. Errors were encountered trying to hard delete the objects. Contact AspenTech Customer Support if you believe you still have issues of this sort. No objects difference between AgentBuilder and SystemManager. The utility did not find any issues it can fix. If you have System Manager open while performing the hard deletions, you will need to restart it to refresh the hierarchy. Note: The utility cannot fix issues where there are assets or locations that have the same name within the hierarchy. The utility will not know which one to prioritize. Keywords: Mtell hierarchy discrepancy References: None
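The Check Difference step in the article above is essentially a set comparison between the two products' hierarchies. The sketch below is a hypothetical illustration of that idea (object names are made up; the utility's real logic is not public): it reports which objects exist only in System Manager and which exist only in Agent Builder.

```python
# Illustrative set-difference check, analogous to what "Check Difference"
# reports: objects present in only one of the two hierarchies.
def hierarchy_difference(system_manager_objs, agent_builder_objs):
    sm, ab = set(system_manager_objs), set(agent_builder_objs)
    return {"only_in_system_manager": sorted(sm - ab),
            "only_in_agent_builder": sorted(ab - sm)}

diff = hierarchy_difference({"Pump-101", "Fan-7"}, {"Pump-101", "Motor-3"})
print(diff["only_in_system_manager"])  # ['Fan-7']
print(diff["only_in_agent_builder"])   # ['Motor-3']
```

As noted above, a plain set comparison like this cannot resolve duplicate names within one hierarchy, which is why the utility cannot fix that case either.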
Problem Statement: When sending an email through Mtell, you may encounter the following error: Error in processing. The server response was 5.7.3 STARTTLS is required to send mail [...]
Solution: This error indicates that the server requires a security layer (STARTTLS) before it will accept email, for security and verification purposes. The error is resolved by enabling SSL in the SMTP server info box. SSL is generally recommended to be enabled by default. This KB article (https://esupport.aspentech.com/S_Article?id=000099987) describes how to set up the SMTP server.
Keywords: Mtell STARTTLS email error
References: None
Problem Statement: During the upgrade process for Aspen ProMV, once you reach the prerequisite validation stage you may see that manual installation of third-party software including RabbitMQ is required as shown below. This article describes how to manually install a newer version of RabbitMQ if the installer is unable to complete the prerequisites on its own. For help installing Java specifically, refer to the KB article here: https://esupport.aspentech.com/S_Article?id=144801.
Solution:
1. After following the installer instructions for removing previous versions, go to the RabbitMQ changelog page: https://www.rabbitmq.com/changelog.html
2. Find the version number listed in the Aspen installer, 3.9.13 in the above example.
3. Click on Release Notes as shown below, which will take you to a GitHub page.
4. Under Assets, find the appropriate installation media for your operating system. For machines running Windows, download the file with the *.exe extension, or rabbitmq-server-3.9.13.exe for the above example.
5. Double-click the downloaded file and follow the installation wizard.
6. Once the installation is finished, go through the ProMV installation wizard again; the RabbitMQ prerequisite should no longer appear.
7. If the prerequisite for RabbitMQ has not changed, open a browser and go to localhost:15672, log in using the default credentials admin for both username and password, and make sure that the RabbitMQ version displayed is correct. If the version is incorrect, make sure the correct version was downloaded and try installing it again.
For additional details, refer to https://www.rabbitmq.com/download.html for the official documentation.
Keywords: RabbitMQ, upgrade, AspenONE ProMV, Prerequisite
References: None
Problem Statement: When accessing and using web applications to view Mtell alerts from Alert Manager or Mtellview, you may encounter a variety of issues that occur in some or all available browser types such as Edge or Chrome. These can range from "Fatal error" to "Unable to load trend," among many other errors. Generating browser debug logs can help to troubleshoot and diagnose the issue.
Solution: For Firefox, Edge, or Chrome, press Ctrl+Shift+J, then click Console to show the browser logs. Alternatively, click the 3 dots at the top right of the browser window > More tools > Developer tools > Console. An example console with errors is shown below; you can click Default levels and enable Errors only to filter the number of logs shown. These logs can be provided to Aspen eSupport to help troubleshoot any errors that occur during normal operations. Keywords: browser debug edge chrome References: None
Problem Statement: When trying to install/upgrade Aspen ProMV when the SQL server is hosted on the same machine, the current Windows user must have sysadmin access to the SQL server; otherwise you may encounter the following error: "No sysadmin access to SQL server".
Solution: To check sysadmin permissions: Open Microsoft SQL Server Management Studio > Connect to the SQL Server Database. Expand Security > Server Roles > Double-click sysadmin. Verify that the Windows user performing the installation is listed under the sysadmin members. If the user is not listed, contact the SQL server's database administrator to request the sysadmin role. If the current user has sufficient permissions to make changes in the SQL database, a user can be added to the sysadmin list as follows: click Add > Browse > select the relevant user > click OK > click OK. If the SQL server is located on another machine, the installation wizard should pass this screen, but will require using the ProMV service utility after installation to specify the SQL data source. The service utility is described in this article: https://esupport.aspentech.com/S_Article?id=000100594. Keywords: ProMV SQL server sysadmin access database References: None
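The same membership check SSMS shows can also be run in T-SQL with SELECT IS_SRVROLEMEMBER('sysadmin', 'DOMAIN\user'), which returns 1 for members. The Python sketch below simply mimics that comparison for an exported member list (the function and sample names are hypothetical, not part of ProMV):

```python
# Illustrative sketch: checking a Windows login against a list of sysadmin
# role members, the way you would visually in SSMS. In T-SQL the equivalent
# is:  SELECT IS_SRVROLEMEMBER('sysadmin', 'DOMAIN\user');

def is_sysadmin(user: str, sysadmin_members: list[str]) -> bool:
    """Case-insensitive check of a Windows login against the sysadmin list."""
    # SQL Server logins are case-insensitive by default, so normalize
    # both sides before comparing.
    return user.casefold() in (m.casefold() for m in sysadmin_members)

if __name__ == "__main__":
    members = [r"CORP\svc_promv", r"CORP\Installer", "sa"]
    print(is_sysadmin(r"corp\installer", members))  # True
```

If the installing user is not found, request the sysadmin role from the database administrator as described above.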
Problem Statement: Software License Manager (SLM) License File Installer
Solution: The aspenONE License File Installer is a utility that will automatically verify and install your network or standalone license file. This solution provides the aspenONE License File Installer utility and instructions on how to use it. Starting with V11.1, the aspenONE License File Installer has the ability to schedule licenses with a future start date. The “scheduled license install” feature will install the license exactly at the time when the license becomes valid, eliminating the need to manually install the license yourself. Note: If you are installing a network license file on your license server machine, please ensure you have the aspenONE Software License Manager installed. See related installation videos below. When installing a dongle-free license (Network or Standalone), it must be installed on the machine with the matching locking information that you provided during the license request. Launch the aspenONE SLM License Manager and click on License File Installer. Click on the File icon and browse to the saved location of the license file. If you have multiple license files and are unsure which license file to install on the machine, then click on the blue “Folder” icon to browse a folder to find all applicable license files. It will display all the license files available in the folder. Click on a license with a Green check mark, then click Next. (If you see a license with an Orange clock, this indicates a license with a future start date; see the steps below for scheduling this license file.) The License File Installer will follow a series of steps and finish installing the license file. If you are scheduling a license with a future start date, select the license with an Orange clock and click Next. The License File Installer will follow a series of steps and schedule the license file. A license file installer icon will be placed on the System Tray. You can click on the icon to get more details.
You will see the following dialog from the System Tray after a successful install. Related Videos: Video: How to Install a Standalone License Video: How to Install a Network License Video: How to Install the Software License Manager (SLM) Server Related Articles: How to Install a Network License File Manually Keywords: SLM Standalone License Network License Software License Manager License Installation License Install References: None
Problem Statement: How to specify Liquid, Vapor and Two-Phase Heat Transfer Coefficient in Aspen Plate Exchanger?
Solution: You may enter a value for the liquid, vapor, or two-phase heat transfer coefficient to override the calculated value. Program-calculated values should normally be used, unless you have a very good reason for overriding them. You can specify the heat transfer coefficients under the Input | Program options | Thermal Analysis | Heat Transfer/Hydraulics tab. Keywords: Heat Transfer Coefficient, Aspen Plate Exchanger. References: None
Problem Statement: What is the Polymer Attribute Sheet & its application?
Solution: The Polymer Attribute sheet is available under the reactor’s results form. This sheet is very useful for viewing the properties of polymer generated locally inside a reactor. For multi-site reaction kinetics, the sheet displays the properties of polymer generated at each site in addition to the composite polymer. For other types of kinetics, the sheet displays composite local properties only. These results can be used to help confirm the accuracy of the local chain-length or molecular-weight distribution plots. The Step-Growth reaction model does not predict higher moments; when using this model, DPW, MWW, and PDI will be blank here. The Step-Growth, Segment-Based, and Ionic reaction models violate assumptions in the method of instantaneous properties used to calculate molecular weight distributions. User kinetics with side reactions may also do so. As a result, the molecular weight properties reported by these models may be inaccurate. Usually they are still close unless the rates of certain reactions are high. For more details on the calculations, please refer to Aspen Plus Help. Keywords: Polymer Attributes, DPW, DPN, MWN, MWW, PDI, MWS References: None
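The averages reported on the Polymer Attribute sheet are standard moment ratios of the chain-length distribution. A worked Python example (illustrative only, assuming a homopolymer with a single segment molecular weight and mole-based chain-length counts; the function name is made up):

```python
# Worked example: how DPN, DPW, PDI, MWN, and MWW relate to the moments of
# a chain-length distribution. Assumes a homopolymer so molecular weights
# are chain length times the segment molecular weight.

def averages(counts: dict[int, float], seg_mw: float):
    """Return (DPN, DPW, PDI, MWN, MWW) from chain-length mole counts."""
    m0 = sum(counts.values())                       # zeroth moment (moles)
    m1 = sum(n * c for n, c in counts.items())      # first moment
    m2 = sum(n * n * c for n, c in counts.items())  # second moment
    dpn, dpw = m1 / m0, m2 / m1
    return dpn, dpw, dpw / dpn, dpn * seg_mw, dpw * seg_mw

if __name__ == "__main__":
    # Toy distribution: equal moles of 100-mer and 200-mer chains.
    dpn, dpw, pdi, mwn, mww = averages({100: 1.0, 200: 1.0}, seg_mw=28.0)
    print(dpn, round(pdi, 3))  # DPN = 150.0, PDI ~ 1.111
```

Since the Step-Growth model does not predict the second moment m2, DPW, MWW, and PDI cannot be computed this way, which is why they are blank on its sheet.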
Problem Statement: What are various property distribution types available for polymers?
Solution: The common polymer structural properties for which distributions are typically considered include: Chain size: molecular weight or chain length Copolymer composition (not modeled in Aspen Plus) Degree of branching Polymer particle size In order to accurately characterize a polymer component, and maintain control of polymer product properties, engineers must concern themselves with these distributions. From a modeling standpoint, many theoretical and empirical functions have been developed to represent distributions. These functions tend to fall into categories derived from their formulation, or from their graphical representation. For example, distributions that consider two dependent parameters simultaneously (for example, chain size and copolymer composition) are termed bivariate distributions. Distributions that mimic the normal bell-shaped graphical representation are called unimodal distributions. This is in contrast with distributions that reveal several peaks, which are called bimodal or multimodal distributions. This figure shows examples of unimodal and bimodal distributions. For more details on the calculations, please refer to Aspen Plus Help. Keywords: Structural properties, distribution types, polymer References: None
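A concrete example of a theoretical unimodal function is the Flory "most probable" chain-length distribution, w(n) = n (1-p)^2 p^(n-1); blending two such populations with different conversions p produces a bimodal shape. A short Python sketch (illustrative only, not Aspen Plus's internal model; function names are made up):

```python
# Sketch: the Flory most-probable weight-fraction distribution is unimodal;
# a two-population blend with different conversions p gives a bimodal curve.

def flory_weight(n: int, p: float) -> float:
    """Weight fraction of n-mers for the most-probable distribution."""
    return n * (1.0 - p) ** 2 * p ** (n - 1)

def bimodal_weight(n: int, p1: float, p2: float, frac1: float = 0.5) -> float:
    """Weight fraction for a blend of two Flory populations."""
    return frac1 * flory_weight(n, p1) + (1.0 - frac1) * flory_weight(n, p2)

if __name__ == "__main__":
    # The weight fractions of a single population sum to unity.
    total = sum(flory_weight(n, 0.99) for n in range(1, 20000))
    print(round(total, 6))
```

Plotting flory_weight against n shows a single peak, while bimodal_weight with well-separated p1 and p2 (for example 0.95 and 0.999) shows two, matching the unimodal/bimodal contrast described above.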
Problem Statement: How do I exclude a source from the calculation for a particular scenario in Aspen Flare System Analyzer (AFSA)?
Solution: To exclude a source from the calculation, select Scenarios from the Build section of the Home ribbon. Choose the scenario(s) for which you want to ignore a source. In the Scenario Editor pop-up, select the Sources tab, where you can ignore a source from the calculation for the particular scenario. The check boxes in the Ignored column allow you to exclude any source from the calculation for that scenario. In this case the back pressure on the source will not be calculated. Keywords: Exclude, Source, Scenario. References: None
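Conceptually, the Ignored checkboxes act as a per-scenario filter applied before the network is solved. A minimal Python sketch of that idea (names are hypothetical; this is not the AFSA API):

```python
# Conceptual sketch only: a scenario keeps a per-source "ignored" flag, and
# ignored sources are dropped before the back-pressure calculation runs.

def active_sources(scenario: dict[str, bool]) -> list[str]:
    """Return source names whose Ignored checkbox is not ticked."""
    return [name for name, ignored in scenario.items() if not ignored]

if __name__ == "__main__":
    # True means the Ignored column is checked for that source.
    fire_case = {"PSV-100": False, "PSV-101": True, "BDV-200": False}
    print(active_sources(fire_case))  # ['PSV-100', 'BDV-200']
```

Only the sources in the returned list would contribute flow, and no back pressure would be reported for the ignored one, mirroring the behavior described above.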
Problem Statement: When configuring Aspen ERP Connect, SAP settings for TSK_ERP are configured using Aspen ERP Connect Administrator (the ERPConnectAdmin.exe tool found in the C:\Program Files\AspenTech\InfoPlus.21\db21\code folder). This knowledge base article explains where the SAP settings configured in Aspen ERP Connect Administrator are saved.
Solution: The SAP settings are saved in a file called tsk_erp.exe.config, which is also located in the C:\Program Files\AspenTech\InfoPlus.21\db21\code folder, the same folder as the ERPConnectAdmin.exe tool. When launching the ERPConnectAdmin.exe tool, the user needs to have Administrator privileges or write permission to the folder. The tool can also be launched by right-clicking on it and selecting Run as administrator from the context menu. Note: If TSK_ERP is already started and running in Aspen InfoPlus.21 Manager, restart the task by performing as below: Launch Aspen InfoPlus.21 Manager. Select or highlight TSK_ERP under the Running Tasks section in Aspen InfoPlus.21 Manager. Click on the STOP TASK button. Select or highlight TSK_ERP under the Defined Tasks section in Aspen InfoPlus.21 Manager. Click on the RUN TASK button. Keywords: Aspen ERP Connect TSK_ERP tsk_erp.exe.config References: None
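Since tsk_erp.exe.config is a standard .NET application configuration file (XML), its contents can be inspected with any XML reader. A Python sketch for reading an appSettings-style section (the key names in the sample are made up for illustration; the real keys written by ERPConnectAdmin.exe may differ):

```python
# Illustrative only: reading key/value pairs from a .NET-style .exe.config
# document. The sample XML and its keys are hypothetical examples.
import xml.etree.ElementTree as ET

SAMPLE = """<configuration>
  <appSettings>
    <add key="SapHost" value="sapserver01" />
    <add key="SystemNumber" value="00" />
  </appSettings>
</configuration>"""

def read_app_settings(xml_text: str) -> dict[str, str]:
    """Return appSettings key/value pairs from a .exe.config document."""
    root = ET.fromstring(xml_text)
    return {e.get("key"): e.get("value") for e in root.iter("add")}

if __name__ == "__main__":
    print(read_app_settings(SAMPLE))
```

Remember that edits to the file only take effect after TSK_ERP is stopped and restarted in Aspen InfoPlus.21 Manager, as described above.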