Problem Statement: Whenever there is a new installation of Aspen Process Recipe Manager (APR) or Aspen Process Sequencer (APS), the Web server can be installed on a different machine. If this is the case and IIS was not installed on the APR or APS server beforehand, the Web page will not connect to it. | Solution: For all Aspen Process Sequencer views other than the Graphic (custom) page, the content is generated from the ATM_WEB web site, which is installed on the Aspen Process Sequencer server. The resulting web pages are hosted within the Aspen Production Control Web Server (PCWS), and they provide full functionality for monitoring, operating, and managing transition packages.
PCWS requires the folders in C:\inetpub\wwwroot\AspenTech\atm_web from the APS Server.
As a prerequisite, install IIS first and then the AspenTech recipe software. Once IIS is installed, the APS server will generate the atm_web files and send them to the web server. PCWS then constructs a URL based on the APS server hostname and embeds an IFRAME in the PCWS web page that displays the ATM_WEB application inside it.
Fixed in
V12 installation guide
VSTS 497654
Keywords: Aspen Process Sequencer, Aspen Process Recipe, PCWS
References: None |
Problem Statement: How do I use the Software License Manager (SLM) Commute tool? | Solution: The SLM Commute tool allows the user/client to borrow licenses from a network server. These borrowed (or commuted) licenses allow a client computer to run the licensed product while disconnected from the network for up to 30 days.
Note: You must run SLM Commute tool when connected to the License Server network to obtain and verify the licenses required.
The commuted time is specified in days, with a maximum of 30 days. The licenses can be returned prior to their expiration date. In order to successfully commute a license, the commutable feature must be activated in the license file.
To use the SLM Commute tool:
From the Start menu, select aspenONE SLM License Manager
Click on Commute to launch the commute tool
The license server that was configured in the SLM Configuration Wizard should be listed under the SLM Server(s) column.
You can view licenses by product or by server.
Click the Licenses by Products tab. You can select one or more licenses under the product.
Click the License By Server tab. You choose one or more licenses and the number of licenses you want to check out.
In the Days to check out license(s) from server field, enter the number of days you require the license(s). The number of days can be any integer from 1 through 30.
Starting in V9, you can now also select the number of tokens to commute based on the licenses chosen.
More tokens can be commuted if you plan on using more than one instance at a time. For example, HYSYS requires 14 tokens for one instance; to open another instance at the same time you will need a total of 28 tokens.
Then click Commute
If the commute is successful, SLMCommute displays the list of commuted licenses. The license or licenses are temporarily released from the server to your hard drive. You can now exit the tool and run your licensed product away from the network.
The license will expire automatically at midnight on the last day of your license period. You can return the license before the expiry date.
To return the commuted licenses:
Select the licenses you will be returning.
Click Return or Return All
Recommended Practices
To maximize the efficiency of your network licenses when commuting:
Make a note of the server names or IP addresses.
Always check licenses back in when you reconnect to the network.
Take the licenses only for the period that you require.
Do not take any more licenses than you need.
Tip: If you find SLM Commute is slow, open a licensed product before running SLM Commute.
Related Articles:
Video: How to commute a license
Keywords: SLM Commute, license, checkout, expire, network, server, tools, V11
References: None |
Problem Statement: What should I do when the “Failed to update the pressure profile” message occurs, when trying to update pressure drops in the column using results from the Column Internals hydraulic calculations? | Solution: When Export Pressure Drop from Top/Bottom options are chosen, the column's pressure profile is updated based on the calculated pressure drops, and the pressure specification for each stage is overwritten.
When Export Pressure Drop from Top is selected, the condenser/top stage pressure is retained, and the pressures for the other stages are updated using the calculated pressure drops.
When Export Pressure Drop from Bottom is selected, the reboiler pressure is retained, and the pressures for the other stages are updated using the calculated pressure drops.
The above message occurs when the user-specified column pressure profile is provided for the reboiler/condenser section. To avoid this message and be able to update the column's pressure profile based on the column internals calculated pressure drops, specify the pressure for the stages (typically the top and bottom stages) as shown below.
Keywords: Failed to update the pressure profile, Column Internals, Column Pressure Profile, Export Pressure Drop from Top, Export Pressure Drop from Bottom
References: None |
Problem Statement: How to replace bad trend information in Aspen Fleet Optimizer? | Solution: The Replace Trend Info utility allows you to replace a customer's sales trend data. Aspen Fleet Optimizer replaces these values with the software’s own calculated Expected Sales values. This utility can be used to modify the sales trends for a Customer or the existing Group of Customers within a Terminal or Zone.
Access the Replace Trend Info utility from the Utility option on the Aspen Fleet Optimizer main menu.
Replace Trend Info utility is commonly used during holidays when sales trends fluctuate abnormally. Replacing the actual abnormal sales with an expected sales figure ensures an accurate forecast after the holiday. The expected sales figure makes the software think that these are the actual sales, and Aspen Fleet Optimizer forecasts shipments according to these figures. This allows the customer to hold proper product inventories. Aspen Fleet Optimizer would use this information to update the sales trends and ensure a more accurate forecast in the case of special exceptions.
Keywords: None
References: None |
Problem Statement: What is the purpose of the Data Quality Manager in Aspen Fleet Optimizer? | Solution: The Data Quality Manager allows you to reconcile a customer's sales and inventory data by processing the data through a series of reconciliation tests. If Aspen Fleet Optimizer cannot reconcile a customer's newly gathered sales, inventory, and delivery information with historical information, that customer's information is flagged as an exception and is highlighted in red. Reconciliation fails when the difference between reported and expected sales, inventory, and delivery information exceeds a specified tolerance; when the tolerance is not met, an exception occurs. When processing an exception, you must evaluate the data to find the figures that hinder reconciliation. With the irreconcilable data displayed, analyze the information and update or correct the sales, inventory, and delivery information so that reconciliation is possible. Messages at the bottom of the Data Quality Manager dialog box assist you in finding and correcting an error. Once the data is updated, the customer's shipments are automatically re-forecast and the data is reprocessed to test for reconciliation.
Keywords: None
References: None |
Problem Statement: What are the product categories used for in Aspen Fleet Optimizer? | Solution: Fuel categories are generic names for varieties of gasoline and diesel fuel. For example, fuel categories could be named Unleaded, Super, Special, Supreme-Diesel, and so forth. Fuel categories must be defined in order to identify the fuel type of each product in the system. There are no limits to the number of fuel categories you can create.
Keywords: None
References: None |
Problem Statement: The Event Forecaster allows you to plan for an upcoming event by creating an event plan based upon the historical demand data for a given day of the week to use as the forecast for the upcoming event. | Solution: For example, if Monday, March 3rd, is a bank holiday and you anticipate sales on March 3rd to be the same as any typical Sunday, you can use the historical demand data for Sundays as your demand data for March 3rd. The system then creates Holiday/Storm records in the database for each customer, which causes the demand planner to generate forecasts for each customer. This process makes sales on Monday, March 3rd, equivalent to the sales on a typical Sunday.
Keywords: None
References: None |
Problem Statement: How are clusters used in Aspen Fleet Optimizer? | Solution: In some petroleum distribution markets, split loads are routine for customers with limited storage capacity or disproportional sales patterns. A split shipment is an alternative to a short shipment. To facilitate the scheduling of partial shipments to multiple customers, you can designate clusters of customers from which Fleet Optimizer can choose to create full shipments. A Cluster is a group of customers with close geographic proximity and for whom it is beneficial to combine shipments to create a full load. Cluster setup is the function of Fleet Optimizer where all clusters are added, modified, and deleted.
A cluster is used for split shipment cases where certain customers cannot receive a full shipment within the preferred delivery window length. A cluster is designed so that customers of this nature can be paired with customers with adequate storage. With clusters, Fleet Optimizer attempts to ship a full shipment to each customer. If a full shipment cannot be accommodated, Fleet Optimizer develops a full shipment and splits the shipment quantities between cluster members to meet the inventory needs of each customer within the cluster.
There are two methods available to you for creating split shipments: Single-Terminal Best Split Candidates (BSC) and Multi-Terminal Best Split Candidates (BSC). Each method requires a different cluster model.
Single-Terminal BSC Model. A customer can be assigned to only one cluster and all of the customers in a cluster must belong to the same primary terminal.
Multi-Terminal BSC Model. A customer can be assigned to only one cluster, and a cluster can have customers that belong to different terminals. All customers in the cluster must belong to the same group.
Keywords: None
References: None |
Problem Statement: When the FCC model is calibrated for the first time and there is no historical data available, you should select a base catalyst as a starting point. Which catalyst can be used from the library? | Solution: The catalyst library offers more than 60 catalysts. One of them is called “generic”; this one should be used as the base.
To keep catalyst factor calculations simple, it is assumed that the “base” catalyst factors are either zero or one. When the catalyst is changed, the new catalyst factors will move away from zero or one.
Some catalyst factors are neither zero nor one; these represent physical data for the catalyst and should be set to match the data for the actual catalyst.
Keywords: catalyst, calibration, base, generic
References: None |
Problem Statement: As per knowledge-base article 127329 AspenTech (very strongly) recommends using TSK_HBAK for backing up Aspen InfoPlus.21 History Filesets.
The question answered in this new article is how to restore those backed up filesets in the case of such as a catastrophic disk failure on the production system | Solution: AspenTech distributes for free a special executable/external task, as well as a database record and a few Aspen InfoPlus.21 Administrator settings, specifically designed for performing extremely safe backups of History Related files on a user-defined scheduled basis. Once the backup has been performed the user has the option to start another application which could for example copy/move the backups to some external device for safe keeping - or any other procedure or application for that matter.
The user can also choose how many of the recent copies of backups they want to keep, with the option to automatically delete the oldest copy when a new one is created.
The backup files created this way have extremely meaningful names that identify the associated history repository as well as the dates of history stored in the files, thus making the restoration of backups very simple and reliable.
This article attempts to explain how to set up and use the above described procedure called TSK_HBAK (also known as just HBAK), including how to switch from using 3rd party or system backup procedures to using TSK_HBAK.
The first thing to point out is that the external task called TSK_HBAK must be defined in the Aspen InfoPlus.21 Manager to use 'h21arcbackup.exe'.
Once configuration is complete, it needs to be a running task that gets started each time the database is restarted - by defining it as an external task as shown below.
The next decision is how many filesets for each repository you want to be backed up when you turn on TSK_HBAK for the very first time. For example, you may be starting with a brand new system where you have only been saving historical data to disk for a few weeks or months. In that case it would seem to make sense to back up all online filesets containing data when starting HBAK for the very first time.
Alternatively, you may have literally hundreds of filesets containing multiple years of data already online. Again, the choice is up to the user as to whether they want to re-backup everything currently online, or maybe just start with the current active fileset - remembering of course that if you are going to back up everything online, then you need to have plenty of available disk space to hold all of the backups!
To configure this recently discussed decision we need to understand how HBAK decides what to backup when it runs. The key to this is the setting of the Status flag for each fileset as viewed from the IP.21 Administrator with an example below.
If Status = None then there is not any data in that fileset and therefore it will not be backed up.
If Status = Mounted then there may well be data in the fileset, but HBAK will NOT back it up unless it is the active fileset (more later).
If Status = Mounted PLUS either or both of SHIFTED or CHANGED, then HBAK will back it up.
Note that a Status of Shifted means that the fileset was shifted out of since the last time HBAK was executed, whereas a Status of Changed means that the fileset was modified (usually due to history insertion or modification) since HBAK was last executed.
So now you should be asking, if all/most of my filesets show some combination of Shifted or Changed, but I know that I already have a backup of them and do not want them backed up by HBAK, what can I do?
The answer is a new switch added with V2006.5 that can be seen by right-clicking on the repository in the IP.21 Administrator and choosing the Backup choice. The switch is called 'Clear Backup Flags'. Clicking on that button will set the status of every fileset in that repository to 'Mounted'. As above, with all filesets set to just Mounted, the only fileset that will be backed up the next time HBAK runs will be the active fileset. So this would be the recommended procedure for avoiding backing up many older filesets that you do not want to back up when first using HBAK.
Note that using the 'Clear Backup Flags' option does not allow selected filesets to have their Status changed. The button is an all-or-nothing option.
Now we are ready to configure the record to tell HBAK exactly what we want to backup at what kind of schedule, as well as where to write the backups and whether any post-backup processing should be performed.
This configuration is by means of a record defined by HistoryBackupDef. Note that AspenTech distributes an example record called HistBackupDemo that could be used either as an example, or modified to meet the user requirements.
A HistoryBackupDef record has a fixed area and a repeat area, however, just before we go into more detail about the record details, a short discussion of exactly what can be backed up is needed here. Here are the four items:-
- History System Files, meaning the contents of the ...\aspentech\infoplus.21\c21\h21\dat directory
- Active Fileset, meaning the contents of the directory containing the files for the Active Fileset for each repository that is defined within the HistoryBackupDef record.
- Shifted Fileset, meaning the contents of the directory containing the files for any Fileset that has been shifted out of since the last time HBAK was run. This is indicated by the word SHIFTED as part of the Fileset Status as seen in the IP.21 Administrator. This would be done for each repository that is defined within the HistoryBackupDef record. Note that the word SHIFTED will be removed from the Status when the backup has been completed, so that the Fileset will not be backed up again unless some other change is made to it in the future.
- Changed Fileset, meaning the contents of the directory containing the files for any Fileset (apart from the Active one) that has been changed since the last time HBAK was run. Changing would only happen if history is inserted into an old Fileset or a historical value is changed. This is indicated by the word CHANGED as part of the Fileset Status as seen in the IP.21 Administrator. This would be done for each repository that is defined within the HistoryBackupDef record. Note that the word CHANGED will be removed from the Status when the backup has been completed, so that the Fileset will not be backed up again unless some other change is made to it in the future.
AspenTech STRONGLY recommends that all 4 items are backed up every time HBAK is executed - see below for record details.
The fixed area of a HistoryBackupDef record is shown below. Most of the fields such as the ReScheduling fields and Save Now are pretty obvious but more details could be seen by browsing the InfoPlus.21 Online help as seen from the Aspen InfoPlus.21 Administrator or as seen from the Aspen InfoPlus.21 Manager.
The rest of the fixed area is specifically related to saving the History System Files as described above. Three fields need a little bit more explanation:
- SAVE LOCATION defines where the backups of the History System Files will be stored. Each backup will have a unique name indicating the date/time that the backup was performed.
- POST BACKUP COMMAND, as per the example below, points to a combination of a distributed bat file, along with the location defined in Save Location above, followed by a numeric. This all means that once the backup has been performed, the bat file will be executed. The bat file will clean up the older backups such that only the number of backups defined in that numeric will remain.
- POST BACKUP RECORD allows a user to point to another database record that would be executed after the above mentioned bat file had been executed. That record might be, for example, an Aspen SQLplus record that performs further file manipulation, such as moving the files to another location or device.
The repeat area NUMBER OF REPOS defines the other 3 items that can be backed up, with one repeat area occurrence for each History Repository that is being backed up via this record.
At first glance each occurrence looks complicated with many fields, but by understanding the fields in the Fixed Area, the repeat area is actually quite simple and similar.
The first field in each occurrence defines the Repository Name and therefore would be different for each occurrence.
As suggested above, the fields Save Active, Save Shifted, and Save Changed should always be set to YES
Associated with each of the saving choices is a Location field defining where to store the backups (Active Location, Shifted Location, and Changed Location).
Also associated with each choice is the option to point to a command (such as a bat file), as well as the option to point to a database record (such as a query), both of which are performed after the backup is complete. However:
An Active Fileset could remain Active for many days, so if backups are performed daily, it would make sense to 'cleanup' the Active backups in the same way as we 'cleanup' the HistorySystemFiles. An example of that configuration is shown in the HistBackupDemo record.
Conversely, 'cleanups' of Shifted and Changed Filesets should NOT be performed. This is because there will never be more than one Shifted backup for each Fileset. This is why cleanup bat files are not provided for the Shifted or Changed options.
The final thing to point out is that it is the responsibility of the Database Administrator to monitor disk availability in the backup directories and on a regular basis move or delete files as necessary to ensure that disk space is always available.
Keywords: History
Backups
HBAK
Tsk_Hbak
Active
Shifted
Changed
References: None |
Problem Statement:
How to change port numbers for logical device entries in Aspen CIMIO Interface Manager? | Solution:
Aspen CIMIO Interface Manager is the application that helps to create CIMIO logical devices on the CIMIO server.
While creating a CIMIO logical device using Aspen CIMIO Interface Manager, the port numbers for the respective DLGP, store, scanner, and forward processes are selected automatically by the interface manager. This is by design.
There can be a requirement to change the port numbers in many scenarios.
Example: if the CIMIO server is migrated, the configuration (CIMIO logical device name, port numbers, etc.) on the new server should be the same as on the old server.
In this situation, the Aspen CIMIO Interface Manager will create the logical device with automatically selected port numbers. The following steps can then help to change the port numbers manually:
a. Stop the logical device in Aspen CIMIO Interface Manager. The logical device can be stopped using the "Stop" option in the right-click menu of the logical device name. Make sure that the green dot changes to red, indicating the logical device is stopped.
b. Navigate to the Windows services file, available in C:\Windows\System32\drivers\etc. Open the file and edit the port numbers for the store, scanner, forward, and DLGP processes (see the example entry after these steps).
c. Save and close the file.
d. Return to the Aspen CIMIO Interface Manager and start the CIMIO logical device. To start, right-click on the logical device name and select "Start". This will turn the red dot to green, indicating the CIMIO logical device has started successfully.
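For illustration only, entries in the services file follow the standard "name  port/protocol" layout; the entry name and port shown here are hypothetical placeholders, so keep the names exactly as the interface manager created them and change only the port number:
MYDEVICE_STORE    10012/tcp    # store process port for logical device MYDEVICE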
Keywords:
Port numbers
CIMIO logical device
Manual update of port number
CIMIO interface manager
References: None |
Problem Statement: How does the Data Quality Manager help ensure accurate forecasts in Aspen Fleet Optimizer? | Solution: The Aspen Fleet Optimizer Data Quality Manager (DQM) module automatically tests the sales and inventory data for quality before planning, scheduling, and dispatching activities. The DQM verifies that sales and inventory information entered into Fleet Optimizer is within a pre-defined set of tolerances. Quality testing involves a reconciliation process to ensure that data for the current day is consistent with data from the previous day. If the imported information reconciles, Fleet Optimizer automatically forecasts shipments for the customer.
Keywords: None
References: None |
Problem Statement: What are the results of Aspen Fleet Optimizer optimization? | Solution: After optimization, a dispatch plan displays showing each on-duty transport and its corresponding workload. This low-cost-per-volume-delivered dispatch plan represents maximum available transport utilization, minimum total variable costs and overflow expense, and the most cost-effective transport dispatch possibility. After a shift is optimized, the dispatcher can view the cost of the dispatch plan. The dispatcher is free to manipulate the optimization in any way and, when acceptable, can “lock” the dispatch to prevent further modifications. After the optimization is locked, the dispatcher can print driver schedules, optimization results, and shift expense reports.
Keywords: None
References: None |
Problem Statement: What does the warning 1913 about risk of phase separation mean? | Solution: "Phase Separation" is a flow maldistribution phenomenon generally associated with condensation inside the tubes of a multiple tube pass heat exchanger unit.
This warning indicates that there is a potential risk of tube side stream phase separation, where the vapor phase will stay at the top of the rear head while the liquid phase will stay at the bottom of the rear head.
The overall mixture composition entering the condensing tubes of the second and subsequent passes can become rich in the more volatile components.
This will lead to a situation where temperature, flow pattern, velocity, physical properties (and composition, if the stream is a mixture) are different from the top tube row to the bottom tube row. Aspen Shell and Tube Exchanger cannot simulate this situation. Instead, it will assume the above parameters are uniform across the different tube rows in each pass.
More details can be found in HTFS Research Network AP08 chapter.
To avoid phase separation, the most common action is to use U-bend as the rear head type.
Keywords: Phase, separation, risk, tubes
References: None |
Problem Statement: What do the reactor temperature bias variables correspond to in the catalytic reformer? | Solution: In catalytic reformer units, the feed temperature to the train of reactors is normally known or can be easily manipulated by means of a pre-heater (fired heater).
The inlet temperature for each reactor can be entered in the Reactor Control Page under the Operations tab.
If the inlet temperature for each reactor is not known, it can be calculated based on other parameters.
To calculate based upon the delta inlet temperature, a base temperature is used as a reference temperature for biasing the individual reactor inlet temperatures.
where
Reactor(i) Inlet Temperature = Reactor Inlet Temperature + Reactor(i) Temperature Bias
The temperature bias is a tuning parameter that allows the user more flexibility to tune the product stream's temperatures. It will only affect the inlet temperatures of each reactor. However, the inlet temperature will have an effect on the performance of the reformer.
In the example below, the inlet and reference temperatures are calculated based on the temperature bias and WAIT (weighted average inlet temperature).
WAIT = (Tr1 x CWr1 + Tr2 x CWr2 + Tr3 x CWr3) / (CWr1 + CWr2 + CWr3)
Where
Tr(i) = inlet temperature of each reactor
CWr(i) = catalyst distribution of each reactor
The catalyst distribution of the reactors is calculated on the catalyst loading page in the design section of the reformer.
Tr1 = Tr ref + Tr1 bias
Tr2 = Tr ref + Tr2 bias
Tr3 = Tr ref + Tr3 bias
Tr(i) bias = temperature bias of each reactor
Then,
505 = ((Tr ref + 0) x 0.1444 + (Tr ref + 1) x 0.2556 + (Tr ref + 2) x 0.6) /1
Tr ref = 503.5 C
Tr1 = 503.5 + 0 → Tr1 = 503.5 C
Tr2 = 503.5 + 1 → Tr2 = 504.5 C
Tr3 = 503.5 + 2 → Tr3 = 505.5 C
The inlet temperature is then used to calculate the WABT (weighted average bed temperature)
WABT = (AVTr1 x CWr1 + AVTr2 x CWr2 + AVTr3 x CWr3) / (CWr1 + CWr2 + CWr3)
Where
AVTr(i) = the average bed temperature of reactor i
AVTr(i) = (Inlet temperature of reactor i + Outlet temperature of reactor i) / 2
The inlet and outlet temperature values for each reactor are reported in the Results tab / Reactors
Then,
AVTr1 = (503.5 + 370) / 2 → AVTr1 = 436.79 C
AVTr2 = (504.5 + 403.8) / 2 → AVTr2 = 454.17 C
AVTr3 = (505.5 + 426.9) / 2 → AVTr3 = 466.23 C
WABT = (436.79 x 0.1444 + 454.17 x 0.2556 + 466.23 x 0.6) / 1
WABT = 458.9 C
Keywords: Temperature, reactor, catalytic reformer, bias, reference
References: None |
Problem Statement: What does the constraint column from the original optimizer monitor mean? | Solution: The value in the optimizer's constraint column represents the value of the constraint's constraint function, c, where c = penalty * (limit - current value) as shown below:
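For example (illustrative numbers only), a constraint with a penalty of 2, a limit of 100, and a current value of 90 would display c = 2 * (100 - 90) = 20.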
Keywords: optimizer, constraint
References: None |
Problem Statement: Can Aspen Fleet Optimizer handle trips that are over twenty-four hours in length? | Solution: Aspen Fleet Optimizer added new functionality in Version 10 to handle trip times that exceed twenty-four hours in length. The system now allows a truck to be turned on for a total of 500 hours at a time, which means the system can handle trip times of roughly 20 days. The system will then optimize these long-haul loads onto the truck automatically. It is recommended that long-haul trucks have a specific truck type designation to ensure the truck is not loaded with many short-trip loads.
Keywords: None
References: None |
Problem Statement: What do the different credit statuses of Suspend, Review and Credit Hold do in Aspen Fleet Optimizer? | Solution: Suspend: Defers customer credit for all orders by setting the Trigger Date to the same date as the delivery date.
Credit Hold: Suspends the delivery of all orders for a customer until they are subsequently released. To access a list of all customers with Credit Hold status, print the Customer Hold report.
Review: Indicates a potential credit problem with a customer. This status does not, however, stop deliveries to a customer.
Keywords: None
References: None |
Problem Statement: The Slug Analysis tool has a column that doesn’t have any heading, but it shows a Min, Max or – next to the frequency column. What does the Min, Max or - mean? | Solution: For each position along the length of the pipe, the slug analysis calculates values for the various slug properties (lengths, velocity, holdup, pressure gradient) over a range of possible slug frequencies.
"Min" means that slug flow can occur at the given position, but the frequency calculated by the Hill & Wood correlation (or the user specified frequency, if using that option) is less than the smallest frequency for which slug properties were calculated.
There is a corresponding "Max" marker for the case where the estimated slug frequency is greater than the largest frequency for which slug properties were calculated.
To better understand this, it may help to click on the "View Cell Plot" button and click on various rows of the Slug Tool Results table.
For positions where "Min" is indicated, the operating point on the plot will correspond with the smallest frequency on the plot.
For positions where slug flow occurs and "-" is displayed, the operating point on the plot will be somewhere in the middle.
Keywords: slug analysis, pipe
References: None |
Problem Statement: What does the warning 1387 about the kettle entrainment fraction mean? | Solution: The warning 1387 appears when the entrainment fraction for the kettle is greater than 0.02.
In general, large kettle diameters allow the liquid droplets more settling time and hence the entrainment will be less.
If the entrainment is high, then it is possible that only one vapour outlet nozzle has been set.
To remove Warning 1387, you can increase the number of vapour outlet nozzles and the diameter.
Keywords: 1387, warning, entrainment, vapour nozzle, kettle
References: None |
Problem Statement: In the Naphtha reformer model for a semi-regen reactor type, the user can enter the midpoint weighting factors for the reactors in the catalyst page located in the Reactor tab. What are the midpoint weighting factors of the reactors and what values should you enter? | Solution: The midpoint weighting factor default value is 0.67, but it can be changed if the user wants. The value should always be between 0 and 1. This value should be the same in both calibration and simulation.
The reformer model considers the Start-Of-Run (SOR) and End-Of-Run (EOR) conditions as applied to the catalyst life. The start point tells the coke on catalyst at the beginning and the end point tells the coke on catalyst at the end, but it is the coke on catalyst at the midpoint that is actually affecting the yields. The default value is 0.67 because coke builds up increasingly, so it was determined that 2/3 was about the time of the midpoint for coke formation.
This can be changed by the user, but it may be best not to change it by too much. Setting the midpoint to 1 means that the whole simulation is running at the end-point conditions.
Keywords: midpoint weighting factor, reformer, coke on catalyst
References: None |
Problem Statement: How do I know if I am using the latest version of Aspen Assay Converter? | Solution: The Aspen Assay Converter version is independent from the Aspen HYSYS version.
The Aspen Assay Converter file version can be checked by following the steps below.
Go to the folder: C:\Program Files (x86)\AspenTech\Assay Converter
Locate the file AFAMAssayConverter.exe
Right click and enter in the Properties.
Open the Details tab and see the file version.
The first version of Aspen Assay Converter was released on 11th February 2014.
The file version associated with this release is 1.0.0.0
A patch was released on 14th November 2014 to fix CQ00539572 (Unexpected Vapor Phase for the heavy cuts at 15 C). This fix has been included in the HYSYS Cumulative Patches.
The file version associated with this release is 2.0.0.2
The latest patch was released in July 2016 to fix property management issues. This is available in aspenONE Exchange in Aspen HYSYS.
The file version associated with this release is 2.8.1.3
Keywords: assay converter, version, patch
References: None |
Problem Statement: What does a target value of 0 mean in the Air Demand Analyzer? | Solution: The Air Demand Analyzer (ADA) target is usually the last condenser vapour outlet stream in the catalytic section. When we select an Air Demand % target equal to 0, it means that we are obtaining an H2S:SO2 ratio of 2:1 in the reference stream (last condenser vapour outlet stream).
You can see this value on the Sulfur Recovery page of the stream.
The Air Demand of -4% in the furnace performance summary would be the amount of air (O2) required to have a stoichiometric ratio of H2S:SO2 of 2:1 in the furnace effluent stream.
In general, it is up to engineering judgement in terms of which stream to attach the ADA and target a 2:1 ratio to facilitate the elemental sulphur formation. It is typical to target the last condenser vapor outlet stream in the catalytic section, but an engineer may decide to target the vapor outlet stream of a condenser for Selective Oxidation Converter or may decide to target the vapour outlet stream of an upstream catalytic stage or the effluent of the furnace. For this reason, we report the air demand on many of the key Sulsim operations, even though you may be only targeting the air demand at a certain point with the ADA. This information is still helpful to the users to quickly understand the stoichiometric situation of H2S:SO2 at different stages in the process.
Keywords: ADA, air demand analyzer, target
References: None |
Problem Statement: In the pipe segment under Design / Parameters, there are several correlations available to model vertical pipe flow. There is, however, not enough information about the correlations recommended for downward flow. | Solution: For downward flow, the recommended correlations are:
· Beggs and Brill: A study by Payne and Palmer improved the hold-up calculation for inclined flows. This is included in the Aspen HYSYS model.
· OLGAS: Has been validated with a lot of experimental/production data
· Tulsa Unified Model: Has fairly limited validation. It is recommended as an improvement to Beggs and Brill. Should be good when slugging is anticipated.
· ProFES: Uses the HTFS model.
Keywords: downward flow, pipe correlations
References: None |
Problem Statement: If rated flow for pipes is selected, what flowrate is used for other units? | Solution: This specific option "Use rated flow for downstream nodes for tailpipe" means that Flarenet will use the rated flow instead of mass flow for tailpipes in the pressure drop correlation of the downstream node only. The rest of the calculations will still use the mass flow.
There are also options available for the rated flow to be used for the nodes (tees and connectors) and inlet pipes (see screenshot).
Keywords: rated flow for pipes
References: None |
Problem Statement: The following error occurs when attempting to start Aspen InfoPlus.21:
"Error 53 returned by Task Service. The Network path was not found." | Solution: This error is related to a networking problem. This error is most often seen when Aspen InfoPlus.21 has been installed on a laptop. However, if you do get this error on a server, check the following:
1. Examine the Windows Task Manager to verify that the following processes are running:
- IP21serv.exe
- Tsk_server.exe
- h21task.exe
- h21archive.exe
- infoplus21_api_pd_server.exe
2. Verify that you can successfully ping the Nodename and IP Address of your computer.
3. Check to make sure you can run NSLOOKUP <host name>. For details on how to use the NSLOOKUP command see the following Microsoft article: http://support.microsoft.com/kb/200525
4. From the Windows Control Panel select Administrative Tools and open Services to verify that the following services are installed and have been started:
- Computer Browser
- Server
- InfoPlus.21 Task Service
- Noblenet Portmapper
- Remote Procedure Call (RPC)
- Task Scheduler
- Workstation
5. Look in Network Neighborhood to make sure you can see your computer on the network.
If Computer Browser and Server are not started within Services in the Control Panel and you cannot see your computer in the Network Neighborhood area, then you have a network problem. Your Information Technology department will need to configure and add your computer to the network and verify that the network shares used by your server are accessible.
Keywords: Error 53, Network error, System error
References: None |
Problem Statement: What does Sales Trend Spike Percentage do in Aspen Fleet Optimizer? | Solution: The Sales Trend Spike Percentage is the threshold value, in percentage terms, at which Aspen Retail will consider a short-term fluctuation in sales an actual trend. The Sales Trend Spike Percentage takes effect only after two days of consistent fluctuations.
Keywords: None
References: None |
Problem Statement: Why do I get a message indicating the HYSYS model was created in a newer version while importing an Aspen HYSYS model into Aspen Flare System Analyzer? | Solution: This problem can appear on machines that have multiple versions of Aspen Flare System Analyzer (AFSA) and Aspen HYSYS installed. The source of the problem lies in the way AFSA reads information from Aspen HYSYS. In order to read this information, AFSA needs to briefly launch and run the simulation from inside Aspen HYSYS.
When AFSA tries to open HYSYS, the program will attempt to open the default version registered on the user’s machine. If this default version is older than the version in which the file was saved, it won’t be able to load the HYSYS file correctly, because older versions of the software cannot open files saved in newer versions.
In order to change the default version of Aspen HYSYS to the same as the one where the file was saved go to Windows Start | All Programs | AspenTech | Process Modeling VX.X | Aspen HYSYS | Set Version - Aspen HYSYS VX.X. From the displayed window select the version of interest. This application must be run with administrator rights to correctly configure the default version.
For example, if you are using HYSYS V9, the Set Version application can be found in Windows Start | All Programs | AspenTech | Aspen HYSYS | Set Version - Aspen HYSYS V9.
Keywords: Import, HYSYS, Set Version, Newer Version
References: None |
Problem Statement: What are the major benefits of Aspen Fleet Optimizer? | Solution: Controlling supply chain failures, such as retain (product does not fit into tanks at the customer site) and runout (customer has no inventory of a given product) conditions is one of the greatest benefits of the Fleet Optimizer. In addition, since products at a customer location sell at different rates, delivering the right mix of product is essential. If too much slow-selling product is delivered, its inventory will build. With Fleet Optimizer, you can better manage inventory positions at customer sites to avoid this issue.
Keywords: None
References: None |
Problem Statement: The TS Cond. Coef. (Tube Side Condensation Coefficient) and TS Film Coef. (Tube Side Film Coefficient) are shown on the Interval Analysis page under Results / Calculation Details / Analysis along Tubes. What is the meaning of these values? | Solution: The TS Film Coefficient is the total condensation coefficient considering the multi-component effect. It is calculated using the equation below:
TS Film Coeff. = Heat flux / (TS Fouling temperature – TS Bulk temperature)
The TS Cond. Coefficient is the pure component TS condensation coefficient.
The correlations used to calculate the condensation heat transfer model are selected on the Condensation page under Inputs / Program Options / Methods / Correlations.
The Silver method is among the most reliable and well-tested methods to capture the multi-component effect when no information about the components is provided.
The heat transfer coefficients (TS Cond. Coefficients) are lower with the HTFS-Mass transfer method because it is taking into account the additional resistance of mass transfer (diffusional effects) in the coefficient.
Keywords: Condensation, film, coefficient
References: None |
Problem Statement: How can I select the most recent historical recording for a tag in a query? | Solution: If your tag is defined against IP_AnalogDef or IP_DiscreteDef, then you can write a query similar to:
select ip_value_time, ip_value from tagname;
This works because updates to ip_value_time trigger writes to history.
Another way is to use a query similar to:
select ip_trend_time, ip_trend_value from tagname where occnum = 1;
occnum = n selects the nth most recent occurrence from any repeat area, regardless of whether the repeat area is a normal repeat area or a historical repeat area.
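A minimal variation on the same query (assuming the same tag) retrieves the ten most recent history occurrences instead of only the latest one:
select ip_trend_time, ip_trend_value from tagname where occnum between 1 and 10;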
Keywords: None
References: None |
Problem Statement: When configuring source extraction servers in the Aspen Production Record Manager Extractor Extraction Server Properties dialog, there is a need to enter the connection string details for the source server, including the SQL Server password. However, if the password is more than 12 characters long, an error (see below) is thrown and the password cannot be stored.
The reason is that the algorithm used to encrypt the password results in the encrypted password string being 4 times as many characters as the original clear text password. The results of the encryption are stored to the table column "password" in the AspenExtractor Database table "dbo.ExtractorServers" which is configured as varchar(50). Hence, if the password has 13 characters, it results in a string 52 characters long after encryption, and this cannot be stored in the table.
Error using SQL: insert into ExtractorServers ( ... ) value ( ... ) . String or binary data would be truncated. | Solution: To work around the issue, update the table column "password" in the AspenExtractor Database table "dbo.ExtractorServers" to varchar(200).
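A minimal T-SQL sketch of this workaround, assuming the default object names; verify the table and column names in your environment, back up the database first, and preserve the column's existing nullability if it differs:
ALTER TABLE dbo.ExtractorServers ALTER COLUMN [password] varchar(200);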
The issue resulted from using a more secure encryption method that was introduced to the product and the problem will be addressed in a future release of the Aspen Production Record Manager Extractor.
Keywords: None
References: None |
Problem Statement: What kind of customers are supported in Aspen Fleet Optimizer? | Solution: Fleet Optimizer allows you to set up both inventory-managed (forecasted) customers and order-entry (manual) customers. For the forecasted customers, Fleet Optimizer generates shipments based on sales and inventory data. For the manual customers, Fleet Optimizer relies on these customers to request deliveries when they are needed. An element of planning is involved in the process regardless of the customer type.
Keywords: None
References: None |
Problem Statement: When using the compressible gas pipe unit operation with the "compressible gas" method, the mass density reported on the Performance tab > View Profile is incorrect. If we change the method to "perfect gas", the mass density gives a correct value. Why is that? | Solution: This happens due to the flash calculation of the inlet stream. The "perfect gas" method always uses the ideal gas equation for density, so the calculation is not affected by the flash result of the inlet stream. On the other hand, the "compressible gas" method uses the flash results of the inlet pipe to calculate the density.
The compressible gas pipe operation performs a Pressure - Temperature (P-T) flash. If the inlet stream is calculated with a Pressure - Vapour fraction (P-Vf) flash, then a P-T flash for the same composition can give a different vapour fraction result. This happens mainly when the inlet stream is close to a phase boundary.
For example, when the pressure and vapour fraction of 1 is specified in the inlet stream and the stream is close to a phase boundary, the P-T flash of the same stream can give a result of vapour fraction of 0 (see below).
The solution is to make sure that the flash calculation for the inlet stream is P-T. Hence, the user needs to specify pressure and temperature to define the stream.
Keywords: Compressible gas pipe, density, compressible gas method
References: None |
Problem Statement: The tube pitch is highlighted in red when I opened a file in V9.0, but it didn't show the red colour in V8.8. | Solution: When longitudinal fin tubes are used, the default tube pitch is 1.25*(TubeOD + 2*Longitudinal fin height).
V9.0 shows the correct calculation.
Keywords: tube pitch, longitudinal fin tubes
References: None |
Problem Statement: I'm dealing with a centrifugal compressor with multiple IGV curves and the Help states that off-design corrections for centrifugal compressor should be available when multiple IGV are selected, but we cannot enable these corrections. | Solution: To use the off-design correction option, we need to provide curves at different speeds for each IGV setting.
For example the “IGV_0” curve that is displayed in the screen shot is for speed 4770 rpm. Therefore to use the off-design option, we need to provide another curve at this IGV value but for a different speed.
The program will generate new sets of curves at the corrected speed and flow rates, and in order to do that the program needs the curves at another speed to interpolate/extrapolate for the corrected speed.
This is the reason the option is greyed out when we do not have multiple speed curves.
Keywords: Centrifugal compressor, off-design correction, multiple IGV
References: None |
Problem Statement: Is the ball valve fitting selected for the pipe a full bore or a reduced bore? | Solution: In the pipe’s fittings tab, the ball valve can be selected and in the Fittings DataBase Editor the loss coefficient parameters (A and B are as shown in the screen shot).
The definition of a full-bore or reduced-bore ball valve is too vague; different manufacturers might have different hydraulic performance data for them. In Aspen Flarenet, users can define their own fittings by adding new fittings with A and B values into the databank, and these fittings can later be used within pipes. You should obtain the hydraulic parameters (the A and B loss coefficient parameters) for the ball valves you want to use from the valve manufacturers.
The reference displayed in the fittings database refers to Crane page A-28, as shown in the screenshot below.
Keywords: pipe fitting, ball valve, full bore, reduced bore
References: None |
Problem Statement: At times while using the Expert System you may receive the error "Error updating reconciliation list table! Another user may have made changes". This error occurs when two users attempt to reconcile data at the same time.
This KB provides SQL queries that will help you identify the user(s) who tried to modify the reconciliation data. | Solution: The DLORECON table in the Advisor database contains the data for the user that modified the last reconciliation record.
In order to find out which user modified the last Reconciliation record, follow the steps below:
Log in to DBTOOLS and open the model with a SUPERUSER account or an account with Superuser privileges.
Run the following query:
Select IND2USER, TIME_ENTER from DLORECON where TIME_ENTER >= '2016-11-09 00:00:00';
Change the TIME_ENTER column in the WHERE clause to reflect the date that you are trying to query for.
This is the table structure for the DLORECON table. You may choose to include any columns needed.
Once you have obtained the IND2USER id from the query above, run the following query to find out exactly who the user is.
Select TAG, DESCRIPT from OLOUSERS where DBINDEX = 10002;
Here '10002' is the IND2USER id obtained from the query above.
Other applicable columns can be found in the table structure as shown below
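For convenience, the two lookups can also be combined into one query; this is a sketch based on the relationship described above (IND2USER referencing DBINDEX), using the same example date:
Select u.TAG, u.DESCRIPT, r.TIME_ENTER from DLORECON r, OLOUSERS u where r.IND2USER = u.DBINDEX and r.TIME_ENTER >= '2016-11-09 00:00:00';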
Keywords: Expert System
Reconciliation
References: None |
Problem Statement: How can I solve the following error:
FATAL> 'T - 2' CURRENCY INPUT USED, ALL INPUT FIELDS MUST BE SPECIFIED | Solution: This error refers to the General Project Data form, specifically the lines defining project base currency:
Please note that the conversion factor in the screenshot above is in light blue, which means it is a default value. However, in some circumstances the value might not be correctly picked up by the calculation engine, and the error is generated, which can be misleading. To solve the problem, just re-enter the conversion factor so it shows in a black font.
Keywords: ACCE, fatal error, T-2
References: None |
Problem Statement: Tags configured using Q records for Aspen Real-Time Statistical Process Control enter into an ALARM state when any one of the ALARM rules specified in the Q_NUMBER_OF_ALARMS repeat area is violated.
This KB provides a method to track the status of the SPC tag. | Solution: When an SPC Tag enters into an ALARM state, a field within the Q_NUMBER_OF_ALARMS repeat area, Q_ALARM_STATE goes from 'OK' to 'ALARM'
Q definition records also consist of a fixed area field Q_ALARM_CONDITION_DV.
This field allows a user to specify a database field for an Integer tag. When the SPC tag goes into an ALARM state, the database field value shall be set to 1.
For Example,
Q_LabATCAI is an SPC tag configured using the Q_XbarCDef definition record.
The Q_ALARM_CONDITION_DV field for Q_LabATCAI points to the IP_INPUT_VALUE field for a DiscreteDef tag
When the Q_ALARM_STATE field for any one of the Alarm rules goes into an 'ALARM' state, as seen in the screen capture below,
The IP_INPUT_VALUE field changes to a '1'
Note: The value remains as '1' even after the SPC tag exits the ALARM state and goes back to an 'OK' state.
The IP_DiscreteDef record can historize the integer value of 1 and this data can then be used for further analysis
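If the flag needs to be cleared after the excursion has been reviewed, it can be reset manually or from a scheduled query; a minimal Aspen SQLplus sketch, assuming a hypothetical discrete tag named SPC_AlarmFlag:
update SPC_AlarmFlag set IP_INPUT_VALUE = 0;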
Keywords: SPC
Alarm
References: None |
Problem Statement: In the RadFrac Profile folder under the TPFQ tab, what is the difference between the results displayed for “Liquid From”, “Liquid Flow”,”Liquid Product” and “Vapor From”, “Vapor Flow”,”Vapor Product”? | Solution: “Liquid from” is the flow rate of the liquid stream immediately upon leaving the stage, including any liquid draw and pumparound from the stage. For the last stage, the Liquid from flow rate is the same as that of the bottom liquid product stream.
“Liquid flow” is the stage liquid flow. Normally, this is the same as Liquid from, except that when the column is using a 2-phase algorithm, liquid draws are not included in Liquid flow. If a thermosiphon reboiler is present, the reported value includes the reboiler circulation rate.
“Liquid product” is the flow rate of the liquid product from the column.
“Vapor from” is the flow rate of the vapor stream immediately upon leaving the stage. It includes any vapor draw and pump-around from the stage. For the first stage, the Vapor from flow rate is the same as that of the vapor product stream on the first stage.
“Vapor flow” is the stage vapor flow. Normally, this is the same as Vapor from, except that when the column is using a 2-phase algorithm, vapor draws are not included in Vapor flow.
“Vapor product” is the flow rate of the vapor product.
Keywords: RadFrac, “Liquid from”, “Liquid flow”, “Liquid product”, “Vapor from”, “Vapor flow”, “Vapor product”
References: None |
Problem Statement: Example on how to quickly import your company's MTOs into ACCE by using an Excel VBA macro. | Solution: This solution is a proof of concept on how to automate the process of entering your company's MTOs into ACCE.
The attached example code illustrates how to parse your piping MTOs and automatically populate the import/export spreadsheet template.
Keywords: Export/Import feature, VBA automation, MTO data.
References: None |
Problem Statement: How to resolve the error "No default unit of measure for Volume Liquid" when performing an Expert Reconciliation using the Aspen Operations Reconciliation and Accounting (AORA) application | Solution: The "No default unit of measure for volume liquid!" message appears in the AORA model on running an Expert Reconciliation if you do not have a Unit of Measure for the specific UOM type with a conversion factor of 1.
For Example,
In the model below, there are two Units of Measure for Volume Liquid, Cubic Meters and Liters.
However neither of the Units of Measure have a Conversion Factor of 1.
When you attempt to load the Expert Engine in such cases, you get the following error message:
To correct this error, set the Conversion Factor to 1 for at least one of the Units of Measure.
The Unit of Measure you elect to configure with the Conversion Factor = 1 needs to be the UOM that you want to be the “BASE UOM” for Mass or Volume Reconciliations. Those “Base UOMs” for Mass and Gas/Liquid Volume Quantities, as well as for your Properties like Density, Temperature, Pressure, etc., come pre-configured based on the Model Basis selected when creating your AORA Model. Customers who decide to modify those default UOM configurations are then responsible for making sure all other related UOM and OLOCONFG tables are updated to maintain consistency for the intended Model Basis.
Keywords: UOM
Conversion Factor
References: None |
Problem Statement: This solution provides a sample code to read data from an Excel spreadsheet into a Process Explorer trend or a graphic object using Visual Basic. | Solution: Methods to access Excel functions are available with the Microsoft Excel Object Library. In order to access this library, Excel needs to be installed locally on the system and “Microsoft Excel 15.0 Object Library” needs to be added to the VBA form. To do so, open Tools -> References, locate “Microsoft Excel 15.0 Object Library” and click on Add.
Note: The Excel library available depends on the Excel version installed on your system. This could be Microsoft Excel 14.0 Object Library or Microsoft Excel 12.0 Object Library depending on the version of Excel available.
Once this library is added, you should have access to all methods and functions available within.
Using the Workbook object you should be able to read data into the Process Explorer trend or Graphic
Please go through the information available on MSDN https://msdn.microsoft.com/en-us/library/office/ff823078.aspx for all methods and objects available within the Excel Object Library
Example: The following example is for a Process Explorer graphic containing a datafield object. The Datafield object is fetching data from a local Excel spreadsheet.
Option Explicit
' Excel objects, typed against the referenced Microsoft Excel Object Library
Dim EP As Excel.Application
Dim WB1 As Excel.Workbook
Dim Wks As Excel.Worksheet
Sub Main()
' Start a new Excel instance and open the source workbook
Set EP = CreateObject("Excel.Application")
Set WB1 = EP.Workbooks.Open("C:\WorkinfFolder\Test")
Set Wks = WB1.Worksheets("Sheet1")
' Copy the value in row 4, column 3 of Sheet1 into the graphic's text box
TextBox1.Text = Wks.Cells(4, 3)
' Close the workbook without saving and shut down Excel
WB1.Close False
EP.Quit
End Sub
Keywords: Visual Basic, Process Explorer, Excel
References: None |
Problem Statement: All MES client applications authenticate users against the Local Security server using the security URL entered in AFW Tools.
For example:
http://<ServerName>/AspenTech/AFW/Security/pfwauthz.aspx
The URL is ideally expected to return a blank page. However under certain conditions the URL may return the following error:
"Unrecognizable Attribute 'requestvalidationMode'. Note that attribute is case-sensitive"
This KB provides a workaround for this error. | Solution: If the URL cannot return a blank page on the client, then most likely the same behavior shall be seen on the server. To resolve the issue, check the following on the Aspen Local Security Server:
Ensure that the .NET Framework 4.0 is installed on the Local Security server. Also ensure that the feature is enabled within Windows Roles and Features
Open IIS -> Click on Application Pool ->Check the Aspen Security Pool.
The Aspen Security Pool must use .NET v4.0. Change the settings for the Aspen Security Pool to v4.0 in IIS.
Click on the Server Name in IIS Manager. On the right hand side click on "ISAPI and CGI restrictions".
Ensure that both 32 bit and 64 bit versions of ASP.NET v4.0.30319 (aspnet_isapi.dll) are allowed.
Once these changes are made, please restart IIS service.
Keywords: AFW
Security URL
requestvalidationmode
References: None |
Problem Statement: This knowledge base solution explains the procedure to work with .csv files in SQL queries while using Aspen SQLplus from a client machine | Solution: When using Aspen SQLplus from a client machine, the client instance shall attempt to communicate with the Aspen InfoPlus.21 (IP.21) Server system where the TSK_SQL_SERVER task is running.
This can be found out by opening Aspen SQLplus Query Host
For example, in this case the host is DEEPIKAVM003
The Aspen SQLplus application instance on the client machine will automatically look for the .CSV file on the host system. Therefore, the .CSV file needs to be physically located on the IP.21 system.
The user can then reference it in the query using a file share.
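For illustration, here is a minimal sketch of such a query, assuming a hypothetical share \\DEEPIKAVM003\Shared that holds a file named lab_results.csv on the IP.21 host. Aspen SQLplus can read a flat file named in the FROM clause and returns each line of the file in a column called LINE, which can then be parsed further as needed:
-- Read the lines of a CSV that resides on the IP.21 host, referenced via a file share
-- (the share and file names below are hypothetical examples)
SELECT line FROM '\\DEEPIKAVM003\Shared\lab_results.csv';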
Keywords: CSV
Query
References: None |
Problem Statement: Aspen Overall Equipment Effectiveness (OEE) measures how effectively manufacturing assets are being utilized. The OEE data is stored in OEE tags defined by the OEEDef definition record.
This KB article explains how to properly configure OEE tags so the users are able to populate them with event data. The KB can help resolve the error message listed in the solution section below. | Solution: The document titled "Steps to configure an OEE tag to accept data" describes the steps users should follow to properly configure an OEE tag to receive event data.
Please click on the link in the Attachments section below to download and review this document.
Keywords: None
References: None |
Problem Statement: User-input pass lane widths are not used. Why?
The horizontal and vertical pass partition lanes have been specified as 18.95 mm,
but the mechanical results summary shows different values (24.94 mm). Why? | Solution: With the cleaning lane or tube alignment option set to aligned for all layouts, the bundle input specifications will all be automatically reset.
In the mechanical summary sheet, different values for all the pass lane widths are reported and furthermore the top, bottom, left and right open distances are calculated differently.
Usually, for 30/60 degrees we don't need to align tubes as it is not easy to mechanically clean those.
With the cleaning lane or tube alignment option set to non-aligned the program will conform to the input pass partition lanes values and all open distance values will be calculated by the program and the top and bottom, left and right distance will be divided equally.
However, if user input values for the open distances need to be also observed, then the work around is to go to "bundle layout" sheet, under layout parameters and change the tube layout option to "use existing layout".
Once the "Tube Layout" tab is accessible, we can modify the tube arrangement by right-clicking on a tube (options such as delete or add tube will be available).
Keywords: Cleaning Lane, Tube Alignment, pass partition lanes
References: None |
Problem Statement: Is it possible to call a VBA macro when an Aspen Plus Excel calculator block executes? | Solution: This is doable by implementing an event handler procedure in Excel.
An "event" in Excel describes an occurrence, such as opening an Excel file, selecting a sheet, changing a value in a cell, etc. Events can be triggered by user input or by external links and applications that interface with Excel.
An "event handler" is a reserved procedure in Excel that always runs when the corresponding event occurs. Predefined "events" and their corresponding "event handlers" are hard-coded into the Excel application.
When the solver executes an Aspen Plus Excel calculator, it writes Import variables to linked cells on an Excel sheet. This will trigger the Worksheet.Change event in Excel.
https://msdn.microsoft.com/en-us/library/office/ff839775.aspx
The reserved procedure that handles this event is:
Private Sub Worksheet_Change(ByVal Target As Range)
End Sub
In order to implement this event handler correctly, the code must be written under the sheet that contains the import variable(s).
From this procedure, a call can be made to another procedure that contains the code that you would like to run with the calculator. The cell that was changed is passed to the procedure as an argument (ByVal Target As Range).
In the attached file, in the top left cell (A1), an event handler is used to count the number of times the Excel in the Calculator is accessed. If you reset this simulation and run it, it will add 7 to A1 because the recycle loop takes 7 iterations to converge.
Keywords: Aspen Plus Excel Calculator VBA macro
References: None |
Problem Statement: How to create a Trend using the Data Trending function. | Solution: 1. From the main menu tab click to open the Capital Cost icon. (highlighted in the image below)
2. Select the Interactive Reports report type
3. Open the Trend options.
If starting a new Trend, select Clear All Saved Trends menu option. A confirmation dialog box will appear, Click Yes to confirm clearing of the data.
Select Add Trend Data to Database to add the scenario data to the trend database.
4. Select Create New Trend in Excel.
The Export Trend Data into Excel dialog box will open with the choice of either appending the trend data to the existing file or creating a new file. Select as appropriate.
Then a dialog box will open with capital cost categories of data to be created in the Excel. Select as required. This will create a new Excel Report.
5. Select View Existing Trend Data, which will open the Excel report.
Once an Excel report has been created, subsequent trends will be added to the same Excel report unless the option to Clear all Saved Trends is selected.
Scenarios can be added from the Aspen Icarus reporter of any of the 3 EEE suite products (ACCE, APEA and AIPCE).
To add a different scenario to the Excel report, open the scenario izp and follow the steps described above.
Please note that for the scenario(s) compared to be uniquely identified in the Trend report, ensure that the project title is entered via the General Project Data window (Project Basis View| General Project Data).
Data Trending excel report will identify each scenario based on the name entered as Project Title
Keywords: Trend Data, Trend, Aspen Icarus Reporter.
References: None |
Problem Statement: Does AspenTech support OPC UA? | Solution: Support for OPC UA was added in V7.3. The User's Guide can be downloaded from these locations:
V7.3 - KB 131905
V8.0 - KB 135073
In addition there is a zip file containing the Help file and two sample applications which can be downloaded from KB 133076.
Additional Information
OPC UA (the 'UA' stands for "Unified Architecture") is described in more detail on the OPC Foundation website.
Keywords: unified architecture
References: None |
Problem Statement: Attempts to override the calculated pipe paint area by entering a user-specified pipe paint area via the pipe component option in the Paint input form:
gives an ERROR> 'PIP- 1' PAINT AREA INPUT IGNORED - NOT APPLICABLE FOR PIPING.
What is the workaround? | Solution: The user cannot change the paint area estimated by ACCE, but they can change the material costs and labour hours by making component % adjustments:
Related knowledge articles on how paint areas are estimated:
KA 144679-2 Why does Aspen Capital Cost Estimator (ACCE) generate more Area of sand blasting, primer paint and top coat paint than the Actual Area of a straight length pipe?
KA 127543-2 Why do Aspen Capital Cost Estimator version 2006.5 and V7.1 give different results in paint (COA 912 and 922) for small diameter pipes?
Keywords: Paint, Pipe Paint Area, Error Message
References: None |
Problem Statement: How is the dielectric constant calculated in the case of a mixed solvent? | Solution: This property parameter is discussed in the knowledge base document number 25347 (“How to report the dielectric constant of a mixture “) and the following important information is provided:
Dielectric constant is not available as a standard property. You can only see the Pure Component T-dependent parameter CPDIEC which is used to evaluate the dielectric constant. Note that the ELECNRTL model must be selected either on the Global property specifications, in a block or section, or at least on the Referenced sheet of the Properties.
Please note that if the parameter CPDIEC is not available for the solvent, Aspen Plus will take the CPDIEC of water instead.
The dielectric constant of a mixed solvent is the mass average of the pure solvent dielectric constants. Aspen Plus will evaluate this average to decide if there is ionic reaction in the mixed solvent. The Rule of Thumb is that ionic reactions do not take place for a system with a solvent dielectric constant less than 10.
Keywords: mixed solvent, dielectric constant
References: None |
Problem Statement: Aspen Cim-IO activation messages continue to pour into the queue files even though all Cim-IO executables are in the Audit exempt list. This can be confusing to troubleshoot because the manual states that any executable in the audit exempt list will not send messages through to the relational database. The reason is due to the login privileges of the user compared to the login of the Audit Username. | Solution: The audit messages are being generated because the AuditProperties record was configured so that plantap.exe would always generate audit messages when activating records unless it was started under a user account named "AspenTech System (exempt from Audit)".
Looking at an example of an audit trail message, it will indicate the executable activating the record (plantap.exe, for example.) Normally, an audit trail message would not be generated when plantap.exe activates a record because plantap.exe was part of the exempt list found under Audit Applications from the AuditProperties record in the IP21 Administrator.
This issue can be resolved by blanking out the AUDIT_USERNAME fields in the AuditProperties record.
The AUDIT_USERNAME field should either be blank or the name of the user account that the corresponding executable must run under to be considered exempt. Generally, it is left blank which means we don't care which account it is running under.
audit_application      audit_username
--------------------   --------------
IQTASK.EXE
PLANTAP.EXE
H21TASK.EXE
CIMIO_C_CLIENT.EXE
CIMIO_C_ASYNC.EXE
CIMIO_C_UNSOL.EXE
Keysimul.exe
btrend.exe
cimgcsi_bs.exe
keysimul.exe AspenTech
Keywords: Audit
plantap.exe
InfoPlus.21
Alarm and Event
References: None |
Problem Statement: Using a query, how can I show when a tag's value changes, along with the point's previous value? | Solution: The attached Aspen SQLplus query joins the history and aggregates pseudo-tables to display when a tag changes value, the value to which the point changed, and how much the value changed. By observing the point's value when the tag last changed, you can also see the point's previous value. The query works with records defined against IP_AnalogDef or IP_DiscreteDef or against records containing the fields IP_Trend_Value, Trend Value, IP_Trend_Time, or Trend Time in the history repeat area.
The query first prompts for a tag name:
Next, the query asks for the search starting and ending times. The default starting time is midnight of the previous day, and the default ending time is the current time.
Then the query requests a period. The query divides the search time span into intervals as long as the period. The default period is ten minutes.
Finally, the query asks if you want to display duplicate rows. The default answer is N.
The query produces results similar to this:
The time stamps show when the tag changed, and the second column shows the value to which the tag changed. The column Range shows the absolute value difference between the largest and smallest values for the tag in the interval. A non-zero range indicates the value changed in the interval.
Keywords: History
Aggregates
Join
Range
rng
Transition
References: None |
Problem Statement: Is it possible with the standard Aspen InfoPlus.21 tagset for a user to save incoming data in a short-term repository as raw, uncompressed data, as well as compressed for long-term historization? | Solution: No, this cannot be done with the standard Aspen InfoPlus.21 tagset.
The only way to accomplish this using the standard tagset is to configure two tag records for every incoming data point. One with compression and one without. The disadvantage is that each tag counts against the licensed total for the system.
Another option of course is to use the Aspen Definition Editor to create a custom definition record.
Keywords: InfoPlus.21
Administrator
References: None |
Problem Statement: How to display the current date and time on a Process Explorer Graphic? | Solution: 1. The first step is to create a ScheduledActDef record that updates at a set frequency. The following creates and configures a ScheduledActDef record named "CLOCK" to be used in a Process Explorer Graphic.
a) Create a ScheduledActDef record named "CLOCK".
b) Reschedule_Interval sets the update frequency of the record. Enter the desired frequency such as every minute or every second. ** Important: An update frequency of 1 second is recommended to provide a more accurate time display (i.e. +00:00:01.0) **
c) Schedule_Time defines the start time for the record. Select a start time. (e.g. 30-JUL-17 12:00:00.0)
2. Next the record "CLOCK" can be added for display in a Process Explorer Graphic. To do this
a) Open Aspen Process Explorer Graphics Editor and add a Data Display Field from the Drawing Toolbox.
b) Double-click or right-click on the recently added Data Display field to view the Data Field Properties. On the Data Source tab:
i) Enter the tag "CLOCK" as the Tag Name field.
ii) Enter Schedule_Time as the Attribute field.
iii) Enter ScheduledActDef as the Map field.
c) Apply the changes and click OK. The current system time should now be displayed in the Process Explorer Graphic - allowing of course for the fact that Process Explorer refreshes at a fixed rate which by default is 7 seconds.
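To confirm that the CLOCK record has been created and is rescheduling as expected, a quick Aspen SQLplus check such as the one below can be used (this assumes the record is named CLOCK, as in the steps above):
SELECT NAME, SCHEDULE_TIME, RESCHEDULE_INTERVAL
FROM ScheduledActDef
WHERE NAME = 'CLOCK';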
Keywords: Clock
Time
Process Explorer Graphic
Graphics Editor
Process Explorer Graphics Editor
References: None |
Problem Statement: Can you specify who e-mails (or sends) an automated web-based report? | Solution: The first e-mail recipient in the report's e-mail list is the sender. So if all reports need to come from the same sender, make sure the sender is listed first in every report's e-mail list.
Keywords: email
e-mail
automated report
sender
References: None |
Problem Statement: This knowledge base article explains why the Aspen InfoPlus.21 Administrator tool can be empty (show no Aspen InfoPlus.21 servers) when opened. | Solution: The appearance of the Aspen InfoPlus.21 Administrator contents is controlled by registry entries. These entries are made when the software is installed.
1. The system must have the most recent version of Internet Explorer.
2. The registry entries can be entered again using regsvr32 (normally found in the Windows\system32 directory).
3. From a command prompt window type (substituting the correct paths) the following two commands:
a. regsvr32 atobject.dll
b. regsvr32 atinfoplus21object.dll
Important: The path to these dlls is C:\Program Files\Common Files\Aspentech Shared and C:\Program Files (x86)\Common Files\Aspentech Shared
The registration of atobject.dll must succeed before registering atinfoplus21object.dll. When both commands have succeeded in that order, the user will need to log off and log back on.
Detailed instructions to register the DLL as below:
a. Open command prompt window with ‘run as admin’.
b. Change the file path to C:\Program Files\Common Files\AspenTech Shared\AtObject
c. Use command regsvr32 atobject.dll and press enter. It should give you message saying DLL registration succeeded, as shown below:
d. Next, close and open command prompt again to select path C:\Program Files (X86)\Common Files\AspenTech Shared\AtObject
e. Use command regsvr32 atobject.dll and press enter. It should give you message saying DLL registration succeeded, like the above screenshot.
f. To register the second DLL, open a new command prompt window with ‘run as admin’.
g. Change the path to C:\Program Files\Common Files\AspenTech Shared
h. Use the command regsvr32 atinfoplus21object.dll and press Enter. It should give you a message saying the DLL registration succeeded.
i. Next, change the command line path to C:\Program Files (x86)\Common Files\AspenTech Shared
j. Use the command regsvr32 atinfoplus21object.dll and press Enter.
Each command will cause a message box indicating success or failure of the operation to be displayed. Since the icon does not appear, you will likely see some sort of failure description. These commands can be issued repeatedly to try various changes, if necessary.
Additional Troubleshooting Steps:
If re-registration of the DLLs as described above does not resolve the problem, here are some other troubleshooting steps:
1. Communication between the Aspen InfoPlus.21 Administrator tool and the Aspen InfoPlus.21 server happens via the TSK_ADMIN_SERVER task. Check in the Aspen InfoPlus.21 Manager that the task is running and not set as "Skip During Startup".
In any case, stop and restart the TSK_ADMIN_SERVER task from the Aspen InfoPlus.21 Manager. Please be assured that restarting TSK_ADMIN_SERVER only affects the Aspen InfoPlus.21 Administrator and will not impact the running database in any way.
2. If another test server on the network is cloned from a ghost image of the primary server, the Administrator may fail to work (shut down any test server to verify this as a potential fix.) This same condition can also cause problems with the history filesets, since they are referenced using UNC paths that reference the nodename.
Keywords: administrator
registry
empty IP21 tree
empty IP21 node
References: None |
Problem Statement: What is the maximum size of the Aspen InfoPlus.21 database? | Solution: The maximum size of the Aspen InfoPlus.21 database is 512 MB. This means the largest number you can enter in the command line parameters for TSK_DBCLOCK for the database size is:
256*1024*1024 - 1 = 268435455
Remember that the database sizing parameter for TSK_DBCLOCK is measured in I*2 (2-byte) words, so 268435455 words corresponds to roughly 512 MB.
Keywords: maximum size snapshot InfoPlus21.snp
References: None |
Problem Statement: The I/O Device Wizard allows you to configure, monitor, and turn Aspen Cim-IO logical devices off and on from the Aspen InfoPlus.21 Administrator.
While it can be beneficial to manipulate Aspen Cim-IO logical devices from the Aspen InfoPlus.21 server using the I/O device wizard, some users may view this capability as a cybersecurity risk. This article explains how to remove the I/O device wizard from the Aspen InfoPlus.21 Administrator. | Solution: Use the following procedure to remove the I/O Device Wizard from the Aspen InfoPlus.21 Administrator.
1. Open a command window as an administrator.
2. Navigate to C:\Program Files (x86)\Common Files\AspenTech Shared
3. Enter the following commands to unregister atcimioinfoplus21extension.dll and atcimioobject.dll:
C:\windows\syswow64\regsvr32 /u atcimioinfoplus21extension.dll
C:\windows\syswow64\regsvr32 /u atcimioobject.dll
4. Close and re-open the Aspen InfoPlus.21 Administrator. The I/O Device Wizard will be removed.
You can use the Aspen Cim-IO IP.21 Connection Manager to monitor Aspen Cim-IO logical devices from the Aspen InfoPlus.21 server:
You cannot use the Aspen Cim-IO IP.21 Connection Manager to manipulate the logical devices on the Aspen Cim-IO servers.
Use the Aspen Cim-IO Interface Manager on the Aspen Cim-IO servers to manipulate the logical devices on the Aspen Cim-IO servers.
Keywords:
References: None |
Problem Statement: How can you force a repository to shift on the first day of each month so that each file set holds history for one month? | Solution: The easiest way to trigger a file set shift at the beginning of every month is to use Aspen SQLplus to create a CompQueryDef record with a line similar to:
system ' "C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin\h21shift" -r TSK_DHIS ';
Substitute the actual path to the utility h21shift for
C:\Program Files\AspenTech\InfoPlus.21\c21\h21\bin\ and the name of the repository for TSK_DHIS.
Enter 1 in the query record's field #SCHEDULE_TIMES and expand that repeat area. In the SCHEDULE_TIME field, enter the next time for the file set to shift (eg. 01-JUN-11 00:00:00.0) and in the field RESCHEDULE_INTERVAL enter: 1 Month.
Note: Keep the SCHEDULE_TIME in the first half of the month (eg. 14-JUN-11). If the SCHEDULE_TIME is in the second half of the month, then the SCHEDULE_TIME may be set to unexpected values as the length of the month changes. Also, ensure that the maximum size of the file set is large enough to hold data for the entire month to prevent the file set from shifting prematurely.
Keywords: fileset
archive
shift
schedule
References: None |
Problem Statement: This knowledge base article describes how to change the computer name (or nodename) of the Aspen InfoPlus.21 (IP.21) server. | Solution: 1. Ensure that you are logged in as the Administrator or a member of the Admin group on the server.
2. Back up all history archive filesets, history configuration files, the latest snapshot, IP.21 Manager group configuration, Aspen Production Record Manager information and MS SQL scripts (see Solution 101273 for more details).
3. Shut down IP.21 from IP.21 Manager.
4. Change the computer name of the IP.21 server.
5. Reboot the IP.21 server for the change to take effect.
6. Start IP.21 from IP.21 Manager. If you get an error: "The users access permission does not allow the operation", follow the steps in Solution 108170 to solve this problem. During the startup process, there will be an error reported while starting H21 tasks. This is due to the Repositories being pointed to the wrong computer name. Simply choose Cancel to continue loading the database.
7. After IP.21 is successfully started, shut down IP.21 and use the chgpaths utility (refer to Solution 106366 for instructions) to change the fileset and repository paths to the new nodename.
8. Stop and restart IP.21 in IP.21 Manager. This time, there should be no error while starting the H21 tasks.
9. Open IP.21 Administrator, check all repositories and file sets, making sure that the Status Flag fields show Current.
10. Open Data source config tool, add a new data source with the new nodename. Add the following services: Aspen DA for IP.21, Aspen Process Data (IP.21), Aspen SQLplus service component and Aspen Web.21 service component (and others if required, eg. add Aspen Batch.21 BCU service and Aspen Batch.21 service if you are using Batch.21).
11. Open ADSA Client config tool and remove the old ADSA data source.
12. In the ADSA Client config tool, change to the new nodename in the Directory Server. Click "Test" and make sure that the ADSA connection is successful.
13. Create a new ADSA data source.
14. Open AFWTools, change the URL of the Client and Server Registry Entries to reflect the new nodename.
15. Edit the Hosts file and change the nodename (if necessary).
16. Go to User Manager and find the IUSR_nodename and IWAM_nodename users (and any other users with the old nodename). Change to the new nodename.
17. Open IIS Manager, connect to the new nodename, select Default Web Site, right-click and choose Properties. Select Directory Security | Edit. and change the username to the new account: IUSR_newnodename. This is applicable for both Authenticated and Anonymous access.
18. Edit the cimio_logical_devices.def file on the IP.21 server and change to the new nodename (if necessary).
19. Open Registry Editor. Export the current settings to a backup file (select All for export range). Search for all entries with the old nodename that are related to AspenTech applications, and replace with the new nodename. Changing the nodename for other non-AspenTech registry entries must be done with extreme caution! Close registry editor to save.
20. Reboot the IP.21 server for the changes to take effect.
21. Go to the Cim-IO servers and find the cimio_logical_devices.def file. Change to the new nodename (if necessary).
22. On the Cim-IO servers, edit the hosts file and change the nodename (if necessary).
23. Do a clean restart for all the Cim-IO servers (see Solution 103176).
24. Ensure that data is coming into the IP.21 database.
Once you have verified that IP.21 is able to obtain data from the Cim-IO server/s, check the layered products (Aspen Process Explorer, Aspen SQLplus, Aspen Web.21, etc) on the server and client side, to ensure that they are all working properly.
Changing the computer name may also affect other 3rd party software such as Microsoft SQL Server. In this case, it may be necessary to re-install these software applications. Please consult your IT department for details.
Keywords: computer name
change computer name
server name
nodename
References: None |
Problem Statement: What is the procedure to export tags from Aspen Process Explorer to MS Excel? | Solution: This article describes the multiple methods available to export process data from Aspen Process Explorer to Microsoft Excel:
1. Copy and paste directly from process explorer to excel sheet. Procedure mentioned below:
a. Select the tag from the legend
b. Right-click on the plot
c. Select Copy, then paste into the Excel sheet. This procedure copies the tag data into the Excel sheet.
2. Using the excel add-in. Procedure mentioned below:
a. Go to the Aspen Process Data tab in Excel
b. Select Current Value, Calculated Value or Historical Value, based on the requirement
c. On the left-hand side, select the tag from the tag browser and fill in the required details like time span, start time, end time, etc.
d. Select Apply and OK. This will also populate the tag values into the Excel sheet.
3. Using VB Form. The procedure is detailed in article number 000010666 - "How to export data from a tag in a trend plot to Microsoft Excel using VB Form.
https://esupport.aspentech.com/S_Article?id=kA80B000000CbnaSAC&articleType=Tech_Tips__kav&avId=ka80B000000Cbn1QAC
Keywords: Process explorer to excel
Microsoft excel
Process tag
VB form
References: None |
Problem Statement: How do I execute a query when Aspen InfoPlus.21 starts? | Solution: When Aspen InfoPlus.21 starts, TSK_DBCLOCK sets the field LAST_LOAD_TIME in the record TSK_SAVE to the time TSK_DBCLOCK loaded the InfoPlus.21 snapshot. This field only changes when InfoPlus.21 starts.
Follow these steps to activate a query defined by QueryDef or CompQueryDef:
Increment the field #WAIT_FOR_COS_FIELDS by one.
Expand the repeat area #WAIT_FOR_COS_FIELDS.
Set the empty field WAIT_FOR_COS_FIELD to "TSK_SAVE LAST_LOAD_TIME" and change COS_RECOGNITION to all.
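As a minimal illustration, the body of such a query record could be as simple as the line below (the message text is only an example); once the wait-for-COS fields are configured as described above, the query will run each time Aspen InfoPlus.21 starts:
-- Example QueryDef body executed at InfoPlus.21 startup
WRITE 'Aspen InfoPlus.21 restart detected at ' || CURRENT_TIMESTAMP;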
Keywords:
References: None |
Problem Statement: An Aspen InfoPlus.21 IQ task is a program that processes Aspen SQLplus queries saved as QueryDef or CompQueryDef records. The IQ tasks are usually named TSK_IQn, where n is a positive integer. How many IQ tasks should be started? | Solution: AspenTech recommends using only one IQ task to process query records defined by QueryDef or CompQueryDef. Performance usually degrades when dividing query processing between several IQ tasks, especially when running on a computer with multiple CPUs or cores.
When executing a query, an IQ task calls Aspen InfoPlus.21 database API functions. Database access times, including the time needed to lock and unlock the database, are very fast, typically on the order of a microsecond. The Windows operating system calls involved in locking and unlocking are relatively expensive, comprising a significant portion of the overall elapsed time, even when the IQ task is granted immediate access.
One IQ task processing a query locks and unlocks the database many times and may finish executing the query within the allotted CPU quantum time slice if no other process is requesting database access during that time slice.
Multiple IQ tasks running on separate CPUs processing query records simultaneously will contend against each other for database access. This means the system calls related to requests to lock and unlock the database frequently force context switching, which is very expensive. In this case, there is a good chance neither task will get very much of its allocated CPU quantum time slice before a context switch occurs, forcing the IQ task to abandon the CPU.
So it is generally best to have only one IQ task, especially on computers with multiple CPUs or cores.
In some cases a second IQ task might be used to process query records accessing slower, external systems (i.e., relational databases). An extra temporary IQ task might also be helpful when testing a new query record that is under development.
By default, the Aspen InfoPlus.21 Manager starts one copy of iqtask.exe as TSK_IQ1. This should be sufficient for most installations.
Keywords:
References: None |
Problem Statement: This article describes an example script, created using Aspen SQLplus, that returns the last known "good" value of a tag historized in Aspen InfoPlus.21. | Solution: The script shown below is an example and can be modified/customized further according to the user's requirements.
Using SET MAX_ROWS=1 limits the result to only one row.
A query such as "SELECT IP_TREND_VALUE FROM ATCAI WHERE IP_TREND_QLEVEL='GOOD'" reads data from the most recent time backwards.
So, the complete query would be:
SET MAX_ROWS=1;
SELECT IP_TREND_VALUE FROM ATCAI WHERE IP_TREND_QLEVEL='GOOD';
Screenshot attached with query and result:
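If the timestamp of the last good value is also needed, the same pattern can be extended (ATCAI is simply the example tag used above):
SET MAX_ROWS=1;
SELECT IP_TREND_TIME, IP_TREND_VALUE FROM ATCAI WHERE IP_TREND_QLEVEL='GOOD';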
Keywords: SQLPlus script
SQL+
Value Status
Last good Value
References: None |
Problem Statement: This knowledge base article provides the minimum and maximum timestamps accepted by Aspen InfoPlus.21. | Solution: The minimum timestamp accepted by Aspen InfoPlus.21 is January 1, 1980, and the maximum timestamp is December 31, 2035.
Keywords: minimum date
minimum time
maximum date
maximum time
earliest date
earliest time
latest date
latest time
References: None |
Problem Statement: How to define product sales in integer values? | Solution: In order to set the sales to always be an integer value, you need to use the MIP table where you can impose Mixed Integer Programming on the matrix variables. Define upper and lower integer bounds for the sales variables.
For example, we want to sell URG and UPR in integer values. The sales of URG and UPR are represented by variables SELLURG and SELLUPR. LI and UI are the lower and upper integer requirement respectively.
In this case, we set the LI for sales of URG as 1 and UI as 100, which means sales of URG is now a discrete variable that would take integer values between 1 to 100, i.e. 1, 2, 3, ...100. The screenshot shows how to set the MIP table for integer constraints.
After the MIP table is created, run the model, go to the material sales section in FullSolution, we will see the sales of URG and UPR are integer values.
Keywords: MIP table, Integer value, Semi-continuous variable
References: None |
Problem Statement: What information is displayed in the customer profile report inside Aspen Fleet Optimizer? | Solution: The Customer Profile report displays some information about the station itself, as well as the station’s order and inventory information. Like the Customer Setup report, it displays all of the product and tank information. In addition, it displays the station’s measured inventory, the current orders for the station and the profiles for those orders. This report will print Average Delivery Window lengths and Runout Inventory for each order. This report is especially useful to determine delivery flexibility and long-term inventory reductions.
Keywords: None
References: None |
Problem Statement: How can Aspen Fleet Optimizer warnings manager help my process? | Solution: The Warnings Manager stores warning messages that are created during the operation of Fleet Optimizer. Warnings information can be sorted and printed to help dispatchers address scheduling problems, data problems, or user errors. This function is used to help ensure data quality before planning, scheduling, and dispatching activities.
Keywords: None
References: None |
Problem Statement: When trying to start the task TSK_SQL_SERVER, the following error message is given:
"Error creating event object: error code = 183" | Solution: The reason for this error is mostly because there is a query that contains a system call that doesn't return (e.g. "Write" opens Wordpad). If the system call does not return, the error will occur when trying to start task TSK_SQL_SERVER. To resolve this issue, either stop the application that is hung or reboot the server.
Keywords: start
startup
SQL
SQLplus
References: None |
Problem Statement: How to solve Maxpass error. | Solution: This error means that some properties are still not within a convergence tolerance when PIMS has completed the maximum number of allowed passes.
You could perform the following actions in order to solve this error message:
1. Increase the number of MAXPASS. To do this, go to the General tab of the Recursion Model Settings dialog box (to open this dialog box, go to the Model Settings tab and click Recursion) and change the MAXPASS value. The default value is 10; we recommend 50 passes. In most models, if the properties do not converge within 50 or 100 passes, it is unlikely that they will converge at all.
2. Activate step bounding. In the same dialog box, choose the option Step bounding on error vectors of non-converged properties only. We also recommend a value of 5 for Delay MAXSTEP until this recursion pass (MAXSTEPD).
In general, MAXSTEP is used to help models that have convergence problems converge quickly. It imposes bounds on how much the error vectors can change in each pass. Its use can lead to different solution paths, and therefore it is possible to get an objective function value that is different than without using MAXSTEP. MAXSTEP can lead to a local optimum if it is used when not needed, but it can be very helpful in models with many nonlinearities.
In order to resolve the Maxpass error without activating step bounding, we recommend the following steps:
3. Run an optimization with the MAXSTEP settings shown before to generate a new !PGUESS file. When the optimization is over, access the model folder by clicking on the Explore Model icon as shown in the image.
Select the last !PGUESS file generated and change the name of the file to PGUESS new, as shown in the next images:
Then go to the PGUESS table located in Recursion on the table tree on PIMS and suppress the existing pguess worksheet, and add the pguess file you have just modified.
Keywords: None
References: None |
Problem Statement: Why is the peak mass flowrate of an adiabatic depressuring case higher than that of a fire depressuring case? | Solution: When designing an emergency depressuring system with the HYSYS depressuring analysis, the calculation result of the fire depressuring case usually decides the size of the orifice or valve. However, users sometimes observe that the mass flowrate of the adiabatic depressuring case is even higher than that of the fire case.
If you see the 'Universal sizing method' which is a basic calculation option for valve CV in HYSYS, you can realize that the size of valve (CV) is proportional to volumetric flowrate and inversely proportional to square root of pressure drop.
Please see the article "What sizing equations are used in Fisher valve of Depressuring Utility or valve?" (https://esupport.aspentech.com/S_Article?id=000030894)
When calculating the adiabatic depressuring case, the initial temperature is usually lower than the temperature of the fire case, which means the depressuring fluid has a higher density. So, even though its mass flowrate is higher, its volumetric flowrate is actually lower than that of the design (fire) case. That is the reason a higher mass flowrate can be accepted without changing the size of the valve in the adiabatic case.
(Left Fig. : Adiabatic, Right Fig. : Fire)
Keywords: HYSYS, Depressuring, Depressuring utility, adiabatic, fire
References: None |
Problem Statement: After the Aspen Process Data Add-In (AtData.xlam) has been added into Microsoft Excel (via Excel's Tools | Add-Ins menu option), it takes a long time for Excel to start. | Solution: The Aspen Process Data Add-In allows S95 tag aliases to be used, if configured, so it attempts a connection to the S95 tag alias database during startup. If an S95 tag alias database is not being used, this connection will fail once the default timeout has been reached.
The default timeout is initially set to thirty seconds.
For sites not using S95 tag aliases, this timeout should be reset to a smaller value, such as one second. This value is set in the registry using the key:
HKEY_LOCAL_MACHINE\Software\AspenTech\ProcessData\S95\TimeOutInSeconds
This key's "Value data" can be specified as 1.
Additional steps.
By default the Aspen Excel Add-In is going to try to connect to an S95 database on the local system. This can be disabled completely by following these steps:
1. In Microsoft Excel, go to the Aspen menu or ribbon, and then select Process Data -> Options/Help,
then select the Data Source tab
2. Click the Server option and select the name of your data source from the pull-down list.
3. Make sure the "Use IP.21 Process Browser Server" is not checked.
4. In the S95 WebServer box, leave the http://, and click on the Set button.
5. Exit Excel and start it again.
Keywords: Excel
Add-In
slow
start
References: None |
Problem Statement: How can Aspen Fleet Optimizer help improve profit margins? | Solution: The ability to evaluate system functions and make adjustments to improve the efficiency and cost-effectiveness of operations is vital to fuels marketing operations as well as making adjustments to meet business goals. Operations reporting gives fuels marketing organizations the ability to evaluate and improve daily functioning at terminals. Management reports are also needed for high-level evaluations and long-term planning. In turn, these analyses can align and improve business practices, as well as support logistics improvements, operational integration, and maximize margins.
Keywords: None
References: None |
Problem Statement: How can I see customer setup information inside Aspen Fleet Optimizer? | Solution: The Customer Setup report displays all setup information for a customer that was entered during setup in Fleet Optimizer. This report allows you to access information about a particular customer's tank sizes or sales cycles. It displays all of the products available at that station, and the corresponding Safe Fill, Pump Stop, Minimum Order and Average Sales information for each product. It also reports the Retain and Runout information, station hours, and the sales percentage for each segment of the day. Lastly, it provides daily Trending Data (one week at a time) for each of the station’s products.
Keywords: None
References: None |
Problem Statement: How do manual orders get delivery windows inside Aspen Fleet Optimizer? | Solution: Delivery window is the time span between the retain and runout points for a product at a customer location. The procedure of manually entering the retain and runout time is called “Setting the Delivery Windows.” For all order-entry customers, you can manually adjust the delivery window to determine when a shipment is delivered, in order to satisfy customer delivery requests. Because Fleet Optimizer does not manage the inventories of order-entry customers, it cannot calculate delivery windows for these customers. By changing either the delivery date or shift, the retain and runout time automatically change to match modifications within the Replenishment Planner and RSO. Once orders are entered into the replenishment planner, the primary planning for the manual customers is complete.
Keywords: None
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve the following errors:
Failed to create server object! An outgoing call cannot be made since the application is dispatching an input-synchronous call.
Failed on acquiring server interface: An outgoing call cannot be made since the application is dispatching an input-synchronous call.
which may be recorded in the Aspen Production Record Manager (APRM) Application Interface log file.
The user may also receive the following error when trying to connect to the APRM server using the Aspen Process Data Administrator on the client machine:
Failed call to CoCreateInstanceEx (0x80040154) Class not registered.
One other symptom worth noting is that the Batch Query Tool is working fine on the APRM server. | Solution: The root cause of the problem is that on some APRM servers ordinary users do not have permission to access the APRM server directory “C:\Program Files\AspenTech\Batch.21\Server\” . Specifically, the users need access to the Batch21Services.exe executable.
This is because the NTFS permissions on the above-mentioned folder are not set correctly on some Windows Server 2012 R2 systems.
In order to resolve the issue, assign Read & Execute, List Folder Contents, and Read permissions for the folder “C:\Program Files\AspenTech\Batch.21\Server\” to “Authenticated Users” and to “Everyone”.
These permission changes are required so that remote users who do not have Admin rights on the APRM Server could access Batch21Services.exe and successfully run the BQT.
Keywords: Access denied
References: None |
Problem Statement: What is the meaning of "No Wetbulb calcs for two phase conditions" warning message in the saturate unit operation? | Solution: In this case, if the Inlet stream has two phases (vapor & liquid), "No Wetbulb calcs for two phase conditions” message will appear.
Also, the wetbulb temperature won’t be calculated by HYSYS. This is because it contains some liquid.
However, when you have a total vapor Inlet, the warning message “No Wetbulb calcs for two phase conditions” will disappear.
Also, the wetbulb temperature will be calculated by HYSYS for a single phase.
Keywords: Aspen HYSYS, Saturate, Wetbulb, warning
References: None |
Problem Statement: Performing clean restarts between Aspen InfoPlus.21 and Aspen Cim-IO servers collecting data from Foxboro AIM API OPC servers or turning ON IO_RECORD_PROCESSING for transfer records reading data from Foxboro AIM API OPC servers can take a very long time, possibly failing to complete.
How can I reduce the time it takes to perform a clean restart or to turn IO_RECORD_PROCESSING ON for transfer records associated with Cim-IO for Foxboro AIM API OPC Servers? | Solution: The Foxboro AIM API OPC server allows you to create a file named alias.cfg that contains a list of addresses. The Foxboro AIM API OPC server opens connections to the addresses in the file and keeps the connections open for subsequent accesses by third party OPC clients like Aspen Cim-IO for OPC.
Attached to this solution is a query named CreateAIMAPIAlias to create alias.cfg for you.
The query asks you to
Enter IO_MAIN_TASK name for Foxboro AIMAPI OPC logical device:
After receiving this information, the query selects the field IO_TAGNAME from all GET transfer records (i.e. records defined by IOGetDef, IOLongTagGetDef, IOLLTagGetDef, IOGetHistDef, IOUnsolDef, IOLongTagUnsDef, and IOLLTagUnsDef) associated with the logical device, removes the station name from the field, and adds the address to alias.cfg
After executing the query, copy alias.cfg from the Aspen InfoPlus.21 Group200 folder (usually C:\ProgramData\AspenTech\InfoPlus.21\db21\group200) to D:\opt\aim\bin\ on the application workstation (AW) hosting the Cim-IO server. (D: seems to be the standard installation drive for Foxboro software on application workstations).
On the Aspen InfoPlus.21 server, toggle the field IO_DEVICE_PROCESSING to OFF in the Foxboro AIM API OPC logical device record. This should delete the scanning list for the device on the Aspen Cim-IO server.
Next, stop the Cim-IO client tasks (usually TSK_M_devname and TSK_A_devname) for the logical device on the Aspen InfoPlus.21 server.
On the Aspen Cim-IO server, stop the Aspen Cim-IO Manager service and change the startup type of the Aspen Cim-IO Manager service from Automatic to Automatic (Delayed Start).
Next, verify the scan list for the logical device has been deleted. The scan list is normally located in C:\Program Files (x86)\AspenTech\CIM-IO\io and is named CIMIO_SCAN_LIST.devname. Delete the scan list for the device if necessary.
Reboot the Aspen Cim-IO server and wait for the Cim-IO Server processes (ASYNCDLGP.EXE, CIMIO_SF_SCANNER.EXE, CIMIO_SF_STORE.EXE, and CIMIO_SF_FORWARD.EXE) to start.
Finally, restart the Cim-IO client tasks on the Aspen InfoPlus.21 server and toggle the field IO_DEVICE_PROCESSING to ON in the Foxboro AIM API OPC logical device record to rebuild the scan list on the Aspen Cim-IO server. This should work faster than before.
Note: You should repeat this process whenever adding tags to be scanned from the Foxboro AIM API OPC server to Aspen InfoPlus.21 Cim-IO transfer records.
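As a quick check before rerunning the attached query, a sketch like the one below lists the GET transfer records (and the tag addresses they will contribute to alias.cfg) for a given logical device. It assumes the main task record is named TSK_M_FOX01 and only covers records defined by IOGetDef; repeat it for the other GET definition records as needed:
-- List GET transfer records and their tag addresses for one logical device
SELECT NAME, IO_TAGNAME
FROM IOGetDef
WHERE IO_MAIN_TASK = 'TSK_M_FOX01';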
Keywords: clean restart
AIMAPI
AIM API
Fox
Foxboro
References: None |
Problem Statement: Why can I not choose a fluid package that has new components for a given column? | Solution: By design, multiple fluid packages must be assigned the same component list to be substituted within the column; due to this, you can see only the one fluid package, Basis-1, in the snapshot below.
This is already published in the KB: How to change the fluid package for a single or multiple trays in a column?
If you need to define a new fluid package that has a different component list for a given column, use fluid package associations. In the picture below, you can find “Fluid Package Association” on the Home ribbon tab. If you want to use Basis-2 for the DC1 column, change the basis from the drop-down list next to the name DC1. It is shown with a red border in the following picture.
Keywords: Fluid Package, Fluid Package Association, Column
References: None |
Problem Statement: There may be situation in which you want to retrieve information on whether a task is still running on an Aspen InfoPlus.21 server. Other than using Aspen InfoPlus.21 Manager to see the tasks that are running in the Running Tasks section, you may also use the Aspen InfoPlus.21 utility TSK_CLIENT which can be found in the default code directory for Aspen InfoPlus.21 (<drive>:\Program Files\AspenTech\InfoPlus.21\db21\code, where <drive> is the disk drive where Aspen InfoPlus.21 is installed).
By running it in the command prompt in interactive mode with the following command,
tsk_client.exe /i
When prompted to select an option, enter 42, which is "Get running task list"; it will list the tasks that are still running.
However, the above-mentioned methods require manual intervention. If you would like this check to be done at a regular interval, with e-mails being sent, those methods do not facilitate that.
This solution aims to address this through the use of a scheduled query. | Solution: There are two methods in which this information can be retrieved.
1. Use Reg (registry) query:
select replace( 'HKEY_LOCAL_MACHINE\SOFTWARE\Aspentech\InfoPlus.21\15.0\group200\RunningTasks\' in line ) as Task
from (SYSTEM 'reg query HKLM\SOFTWARE\Aspentech\InfoPlus.21\15.0\group200\RunningTasks\')
where Task <> '';
2. Use Windows Management Instrumentation (WMI).
LOCAL oReg;
LOCAL regVal;
LOCAL HKEY_LOCAL_MACHINE;
LOCAL strKeyPath CHAr(70);
LOCAL subkey, keys;
LOCAL ver;
HKEY_LOCAL_MACHINE = 2147483650;
oReg = GetObject('winmgmts:\\.\root\default:StdRegProv');
strKeyPath = 'SOFTWARE\AspenTech\InfoPlus.21';
oReg.GetExpandedStringValue(HKEY_LOCAL_MACHINE, strKeyPath, 'Version', ver);
strKeyPath = 'SOFTWARE\AspenTech\InfoPlus.21\' || ver || '\group200\RunningTasks';
oReg.EnumKey(HKEY_LOCAL_MACHINE, strKeyPath, keys);
For Each subkey In keys DO
WRITE subkey;
END
As can be seen above, when using the first method the version of Aspen InfoPlus.21 needs to be hard-coded, so when Aspen InfoPlus.21 is upgraded the SQL script needs to be changed. The second method does not have this issue, as it also reads the installed Aspen InfoPlus.21 version from the registry.
Keywords: registry
running tasks
Windows Management Instrumentation
scheduled query
tsk_client
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve the following error messages received when running a query in Aspen Process Data Add-In that takes longer than 60 seconds to complete:
#ERROR 50003: No connect to server '<servername>' (MS Excel message)
and
RHIS21AGGREG: RPC timeout occurred while waiting for the service to reply. (Process Data Add-In message) | Solution: By default, NobleNetRPC layer (the communication channel between ProcessData and the Aspen InfoPlus.21 server) has a 60 seconds RPC timeout for each call. If you ask for a lot of data that requires the InfoPlus.21 server to spend more than 60 seconds to gather then the call will time out.
If that's the case, users can increase the RPC timeout. To do so, please download and follow the attached document titled: How to increase the RPC timeout.
Note 1: This solution is applicable ONLY to Aspen Process Data Legacy Add-In and the COM Add-in when it is being used with Process Data instead of Process Data Service.
Note 2: By default the COM Add-in uses the Aspen Process Data Service, which does not rely on the NobleNetRPC layer. The default timeout for the COM Add-in is 300 sec (5 min).
Keywords: None
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve the following error:
IP.21 is not accessible from 'server_name' or is not currently running on 'server_name'
which may be encountered when a user attempts to scan tags from the aspenONE Process Explorer server or create a trend. | Solution: The above-mentioned error message indicates that aspenONE Process Explorer cannot communicate with the Aspen InfoPlus.21 server. This is most likely because one or both of the following tasks, TSK_ORIG_SERVER and/or TSK_DEFAULT_SERVER, are either not running or are hung.
To resolve the issue, open the Aspen InfoPlus.21 Manager GUI and check if the above-mentioned tasks are running. If they are not running, please start them making sure they are not set to be skipped during startup. If they are currently running, please restart both tasks.
Note: Starting with version 10 and subsequent versions, TSK_ORIG_SERVER is now handling some database calls for aspenONE Process Explorer.
Keywords:
References: None |
Problem Statement: How can a query update the current value and most recent trend value for a tag without inserting another reading into history? | Solution: Attached to this article is a procedure named UpdateLabSample.
The parameters for UpdateLabSample are:
UpdateLabSample (SampleRecord record, SampleTime timestamp, Sample real)
SampleRecord - Name of a record defined by IP_AnalogDef
SampleTime - Time stamp of the value to be inserted or updated
Sample - Real value to be inserted or updated
This procedure updates a laboratory sample defined by IP_AnalogDef. Data compression must be disabled for the procedure to work correctly.
If SampleTime is more recent than IP_INPUT_TIME in the sample record, then this is a new sample, and the query updates IP_INPUT_VALUE for SampleRecord and sets IP_INPUT_TIME to the SampleTime which forces an insertion into history.
If SampleTime is older than IP_INPUT_TIME, then the query checks if there is a history recording matching the sample time. If so, then this is a modified sample, and the query updates the history recording. If there is no history recording matching the sample time, then this is a late sample, and the query inserts the new sample into history.
If SampleTime is the same as IP_INPUT_TIME for SampleRecord, then the current value and the most recent trend value must be updated, but a new history recording should not be made. The query accomplishes this by using the function XOLDESTOK to set the record creation time of the tag one minute in the future. This allows the query to update the current value for the tag without inserting a value into history. After trapping the error generated when InfoPlus.21 fails to insert the history recording, the procedure uses XOLDESTOK to reset the record creation time to its original value, which allows the query to update the most recent historical value with the new sample.
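As a usage sketch, once the attached procedure has been loaded into a query, it can be called as shown below; the tag name, timestamp, and value are hypothetical examples:
-- Insert or update a lab sample for an IP_AnalogDef tag (hypothetical values)
UpdateLabSample('LAB_TI101', '15-JAN-24 08:00:00.0', 42.5);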
Keywords: XOLDESTOK
exception
update history
insert into history
References: None |
Problem Statement: You can automatically bind CalcScript parameters to InfoPlus.21 tags when you create or modify calculations. Before you do so, you must ensure that the names of the parameters match the names of the tags and enable automatic binding before you create or modify a calculation.
Note: Automatic binding can be enabled by clicking View | Options | AutoBinding tab and selecting the “Enable AutoBinding” check box.
However, autobinding of parameters to InfoPlus.21 tags will not happen if the tag names contain a special character. It’s worth noting that the user will not get any error or warning messages that autobinding was not successful.
This Knowledge Base article provides steps to resolve this issue. | Solution: If a tag name contains one or more of the special characters listed below, you must enclose the tag name in {} (braces) for the autobinding to work.
Examples of Special Characters
· Full stop or period or dot (“.”) . Example: xxx.yyy. Use {xxx.yyy}.
· 'Tab' character at the end of a record name. Locate the record on your source system using the Aspen InfoPlus.21 Administrator and remove the special character by re-entering the record name.
· Hyphen "-"
· Open ("[") and closed ("]") brackets
· Comma (",")
· The “at” sign ("@")
· A forward slash ("/")
· An asterisk ("*")
· The ampersand sign ("&")
· Double quote (")
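For example, a tag named FIC-101/PV (a hypothetical name containing both a hyphen and a forward slash) would need to be written as {FIC-101/PV} for autobinding to succeed.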
Keywords: Special characters
Reserved characters
For more information, please see KB 16987 https://esupport.aspentech.com/S_Article?id=000016987
References: None |
Problem Statement: The results form for the Gibbs reactor has a grayed-out (disabled) form called Keq. Is there some way to get RGibbs to report the Keq values? | Solution: By default, the Keq form is disabled because it is intended to display only restricted equilibrium constants.
To obtain the restricted equilibrium constants, do the following:
On the RGibbs Setup Form's Specification sheet, click on the checkbox for RESTRICT CHEMICAL EQUILIBRIUM
Go to the RGibbs Setup Form's Restricted Equilibrium sheet. Click on INDIVIDUAL REACTION.
Specify the restricted equilibrium reactions.
Note that with the Temperature approach or molar extent for individual reactions option, if you do not specify a molar extent or temperature approach (that is, all reactions are left at the default temperature approach of 0), then RGibbs ignores the reactions. In this case, no restrictions are enforced on the specified reactions, but the Keq values are still reported.
Alternative approaches:
Create a Calculator block to calculate and display the calculated equilibrium constants (see Solution Document 23452). This also serves as a confirmation of the Keq calculation.
Use an Requil block (see Solution Document 22915)
Keywords: equilibrium constant, Keq, RGibbs
References: None |
Problem Statement: This article shows where to find a comprehensive list of platforms supported by recent versions of Aspen products. | Solution: The link https://www.aspentech.com/platform-support contains a comprehensive list of platforms supported by recent versions of Aspen products.
Keywords: Cross reference
supported operating systems
References: None |
Problem Statement: Does long haul logic have to be turned on for all sites in Aspen Fleet Optimizer? | Solution: Inside the AFO .ini settings, you can designate long haul logic for all of the sites or for a limited number of sites. The LongHaulStations setting defines which stations long haul logic is applied to, based on the logic below.
This option determines whether the long haul functionality is applied to all stations or only to specific stations:
* - Long haul functionality is applied to all stations.
<statnum>;<statnum> - Station numbers separated by ';', to apply long haul functionality only to the listed stations.
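As a hedged illustration, either of the following lines could appear in the AFO .ini file (use only one of the two forms; the station numbers are hypothetical):
LongHaulStations=*
LongHaulStations=1001;1002;1003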
Keywords: None
References: None |
Problem Statement: What are the equations used for determining the simple column tray hydraulics in Aspen Plus Dynamics? | Solution: The simple tray hydraulics equation relates the liquid flow rate from a tray to the amount of liquid on the tray. The Francis weir equation for a single pass tray is used:
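A sketch of the relation, using the symbols defined below and assuming the standard Francis weir exponent of 1.5:
QL = KWeir * LWeir * hCrest^1.5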
where:
QL = Volumetric liquid flow rate from the stage
KWeir = Weir constant
The value used for the weir constant is the same as that used in the rigorous tray rating methods.
LWeir = Total weir length
hCrest = Height of the liquid crest over the weir
The total weir length is specified using the ratio of weir length to column diameter. The default for this ratio is 0.7267.
The liquid crest is the difference between the height of liquid on the tray and the weir height. The height of liquid is the ratio of the volume of liquid to the active area of the tray.
In turn, the active area of the tray is specified as a percentage of the tray area. The default value for this percentage is 90%.
To simulate a single pass tray, use the default ratio of weir length to column diameter (that is, 0.7267), and the default value of active area as a percentage of the tray area (that is, 90%).
Keywords: Simple column hydraulics, RadFrac
References: None |
Problem Statement: Installing the Microsoft Excel ribbon-based Aspen InfoPlus.21 Process Data COM Add-in produces the error message "AspenTech.PME.ExcelAddin.ProcessData.dll is not a valid Office Add-in." | Solution: To work around this problem:
1. Make sure you have read/write access to the registry. The COM-based add-ins modify registry settings.
2. Check for the presence of the file Microsoft.office.interop.excel in C:\Windows\Assembly. If this file is not there, go to Microsoft's web site and download the version of the file that matches your version of Excel.
3. Download the attached file exceladdin_regasm.txt to your system and rename it to exceladdin_regasm.bat.
4. Download the attached file addin.reg.txt and rename it to addin.reg.
5. Open a Windows command window using "Run as Administrator" and execute the exceladdin_regasm.bat batch file.
6. In the same command window, execute the command "reg import addin.reg".
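For example, from the elevated command window (C:\Temp is a hypothetical download location):
cd /d C:\Temp
exceladdin_regasm.bat
reg import addin.reg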
Keywords: error, add-in, Excel
References: None |
Problem Statement: How do I execute a query when Aspen InfoPlus.21 starts? | Solution: When Aspen InfoPlus.21 starts, TSK_DBCLOCK sets the field LAST_LOAD_TIME in the record TSK_SAVE to the time TSK_DBCLOCK loaded the InfoPlus.21 snapshot. This field only changes when InfoPlus.21 starts.
Follow these steps to activate a query defined by QueryDef or CompQueryDef:
Increment the field #WAIT_FOR_COS_FIELDS by one.
Expand the repeat area #WAIT_FOR_COS_FIELDS.
Set the empty field WAIT_FOR_COS_FIELD to "TSK_SAVE LAST_LOAD_TIME" and change COS_RECOGNITION to all.
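A minimal Aspen SQLplus sketch of the same configuration is shown below; the query record name StartupQuery is hypothetical, and the same change can be made interactively in the Aspen InfoPlus.21 Administrator:
-- add one empty occurrence to the repeat area
UPDATE StartupQuery SET "#WAIT_FOR_COS_FIELDS" = "#WAIT_FOR_COS_FIELDS" + 1;
-- point the new occurrence at TSK_SAVE LAST_LOAD_TIME and activate the query on every change
UPDATE StartupQuery
   SET WAIT_FOR_COS_FIELD = 'TSK_SAVE LAST_LOAD_TIME',
       COS_RECOGNITION = 'all'
 WHERE WAIT_FOR_COS_FIELD IS NULL;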
Keywords:
References: None |
Problem Statement: This article describes how to manually configure an SPC record using the Aspen InfoPlus.21 Administrator. The GUI wizard QConfigUtility.exe guides the user in setting up Q records defined by Q_XBarCDef, Q_XBarCSDef, Q_XBar21Def, and Q_XBarS21Def. This article uses the fields in Q_XBarDef as an example, which can only be manually configured. | Solution: For the initial setup, create a record defined by Q_XBarDef, give it a record name and fill in the DESCRIPTION and PLANT AREA fields. Provide the Aspen InfoPlus.21 record name used as the input source for this KPI. The fields are TREND_VALUE_FIELD and TREND_TIME_FIELD.
The next set of fields are called the SPC Variable Control Properties by QConfigUtility. These fields determine how the SPC record will process the data.
Q_STD_SUBGROUP_SIZE Selects the number of samples in each subgroup.
Q_SUBGROUPS_IN_CALC Selects how many subgroups will be used in limits calculations.
Q_LIMIT_UPD_TRIGGER Selects how limits are recalculated. There are three choices:
User-Specified
Calculate Once Using CIMQ_MEAN – limits are calculated one time after the Q_SBGRPS_BEFORE_CALC condition has been met.
Recalculate Periodically – limits are recalculated periodically, based on the Q_LIM_RECALC_PERIOD and Q_SBGRPS_BEFORE_CALC values.
Q_LIM_RECALC_PERIOD Specifies how often limits calculations will be performed.
Q_SBGRPS_BEFORE_CALC Designates how many subgroups must be acquired before limits calculation can begin.
Q_INDIVIDUALS_METHOD Chooses how the range is calculated for single sample data. There are three choices:
ESTIMATED SIGMA – The estimated value of sigma is stored in the SPC record by the user. Use estimated sigma when minimal data is available when studying a new variable.
ARTIFICIAL RANGE – The range is calculated using the absolute value of the difference between successive subgroups.
MOVING AVG RANGE – The range is calculated from the absolute value of the difference between subgroups within the moving average window of Q_NUM_MOVING_AVG subgroups.
Q_NUM_MOVING_AVG Selects how many subgroups will be used to estimate the range with the MOVING AVG RANGE estimation method.
Use known mean and sigma when expected values from the process are known and are to be used to calculate control limits.
Q_MEAN_ESTIMATE Value for the subgroup average.
Q_STD_DEV_ESTIMATE Value for the subgroup standard deviation.
Specification limits are constants, such as an engineering tolerance, that limit the maximum amount of variation from a specification.
Specification limits are required to calculate process capability indices.
Q_USL Upper specification limit
Q_LSL Lower specification limit
Exponentially Weighted Moving Average (EWMA) Alarm Coefficients
Q_EWMA_ALARM_FACTOR Sets the value at which the EWMA is in an alarm state.
Q_EWMA_SMOOTHING Sets the smoothing factor (lambda) that determines the memory of the EWMA statistic. Lambda determines how much information from historical data is applied to the moving average.
Q_ALLOW_PARTIALS? Select between using partial subgroups and subgroups that contain the number of samples specified in Q_STD_SUBGROUP_SIZE.
Keywords: SPC records, SPC record fields, Q_XbarDef record
References: None |
Problem Statement: What does the .ini setting breakUpOverflowSplits do in Aspen Fleet Optimizer? | Solution: This setting is based on dispatcher preference. AspenTech recommends that you have the system automatically separate split shipments to preserve the original order. It is also often easier to resolve the cause of a problem shipment when the split shipment is separated.
breakUpOverflowSplits = 1
In this example, the system automatically separates split loads in the overflow list into separate orders.
BreakUpOverflowSplits = 0
In this example, the system leaves the split loads in the overflow list for manual scheduling or exporting.
Keywords: None
References: None |
Problem Statement: What databases are supported by Aspen Fleet Optimizer V10? | Solution: Supported database software:
Oracle 11g server and client.
Oracle 12c server and client.
Microsoft SQL Server 2008 R2 SP4 and client.
Microsoft SQL Server 2012 SP2 and client.
Microsoft SQL Server 2014 SP1 and client.
Microsoft SQL Server 2016 and client.
If you are using an Oracle database and are not installing products on the database server computer, use the Oracle Net Manager applet to create a Local Net Service Name. Ensure this name matches the default name used during configuration. If you are using an Oracle database, ensure that Oracle ODBC driver 11.2.0.4 or 12.1.0.2 has been installed.
Keywords: None
References: None |
Problem Statement: Aspen DMCplus Collect supports several command syntaxes. This article discusses the supported syntax in detail. | Solution: 1. The first supported syntax is:
collect (or dmcpcollect) [-v | -noeu] <file name>
Where:
-v: Validates the collection list before starting the collection.
-noeu: Prevents the engineering units and descriptions in the cle file from being overwritten.
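For example (the collect list file name and extension below are hypothetical):
collect -v mylist.cl
dmcpcollect -noeu mylist.cl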
2. There is another option to run Collect in the background; more details can be found in:
How to run Collect in the background using mpf_manage
https://esupport.aspentech.com/S_Article?id=000015307
It is also important to know that when you run Collect with a command prompt window open:
i) if you log out of the server, the program is killed, which stops your collect list;
ii) if you use the mpf_manage syntax, Collect keeps running in the background.
Keywords: DMCplus
Collect syntax
References: None |
Problem Statement: In order to use an OEE (Overall Equipment Effectiveness) waterfall chart in aspenONE Process Explorer (A1PE) there needs to be an OEEDef record in the Aspen InfoPlus.21 database. From the initial OEE plot in A1PE a user can create an OEEDef record by clicking the 'Create a new OEE record' icon in the upper right:
However, when clicking the icon and filling out the corresponding dialog box:
the following error message appears and the record is NOT generated after pressing the 'Create' button:
Text of error:
Fail to Create NewTestOEE03:
" "
How can this error be overcome? | Solution: Start the web browser (such as Microsoft Internet Explorer or Google Chrome) by right-clicking its icon and choosing 'Run as Administrator'. This choice may not be available if you invoke the icon from the Windows Taskbar.
Keywords: None
References: None |