“Visualize data”. These functions are based on the function “Analyze blasted material” in order to quickly produce reports with valuable information, so that the blasting crew can receive feedback on their work. Another aspect of improving the material in-flow would be to improve the transportation of the blasted material, represented by the function “Improve transportation”. Since the crusher is only utilized when it is crushing material, crusher idle time results in lower material throughput as well as wasted power. These tasks provide key performance indicators (KPIs) to the plant management, who need to be informed regarding the crusher idle time, the time between material dumps and the truck idle times. The functions “Inform plant management” and “Measure idle time” cover these areas.

Figure 6.2: Function tree: Primary crushing process
Even though improving the blasting process may significantly increase the crusher uptime and the utilization of plant resources, there will always be some form of jamming and other faults. “Reduce production stops” covers this topic, with sub-functions “Inform crusher operator” and “Inform plant management”. Firstly, as the crusher operator tends to other equipment during their shifts, they may not know that there is a stop or some other fault with the crusher. As such, this function aims to inform and prioritize alarms via its sub-functions “Predict jamming of crusher”, “Detect jammed crusher”, “Analyze material flow” and “Detect unusual crusher behaviour”. These functions would provide the operator with status notifications from the crusher and allow the operator to act accordingly.
As most organizations have a continuous improvement plan, statistics of the crusher would be an important aspect to deliver. The function “Inform plant management” covers this, which is achieved through the sub-functions “Statistics jammed crusher”, “Statistics material flow” and “Statistics blasted material size”. These functions aim to provide such data for process improvement rather than the daily fault alarms that operators tend to. The crusher should produce material of a certain size. Although not as important for a primary crusher as for the last crushing stage, producing material of a certain size is the main function of the crusher. In order not to cause jamming of the secondary crushing stage, the material size should be monitored. The function “Ensure correct product size”, with sub-functions “Measure product size” and “Measure wear”, provides such information to the plant. The system will inform the plant management of the product size as statistical data and measure the wear of the crusher. This would both extend the lifetime of wear parts and enable better planning of wear-part changes.

Figure 6.3: Function tree: Downstream process
6.3 Idea generation
The idea generation was based on the previous findings from the customer needs study,
observations, technology study and the function analysis. As mentioned before, the pro-
cess consisted of brainstorming sessions. They were conducted both during visits to NCC
Tagene quarry and in the office, which allowed for different perspectives due to environ-
ment changes.
As was mentioned previously, the process can be viewed in three stages: the upstream process, the primary crushing process and the downstream process. When on-site, all of these stages served as environments for the brainstorming sessions. This involved looking at different placements of the technologies and looking at the different environments of each stage. As some of the technologies are sensitive to, for example, differing light conditions, these were taken into account when generating ideas. Furthermore, the on-site visits made it possible to see opportunities to extend the future functionality of the intended system.
In fig. 6.4 below, four possible placements of sensors were identified. Here, sensors looking at the truck could be placed to gain information regarding the size of the blast material (1). Sensors could also be placed such that they look onto the tipping area and crusher feeder, which could enable measurement of blast material size and flow, to be used for predicting jamming and providing analytic information (2). The crushing chamber itself is also a valuable point of investigation. Here, detection of jamming, flow inside the chamber and measurement of wear parts could be achieved through a sensor mounted above it, looking straight down (3). In order to improve the downstream process and measure the outflow of the crusher, a sensor could be placed above the belt as well (4).
Figure 6.4: The possible placements of technologies identified on site
As depicted below in fig. 6.5, (1) and (2) are quite similar. The camera systems in (1) - single camera, TOF camera and stereo camera - can all be placed in a similar fashion. Their placement would provide full coverage of all the desired measurement points elaborated on before. Likewise (2), which consists of a LiDAR system, has a similar placement and the ability to measure the desired aspects. Ideas (1) and (2) are very flexible in their placement, as they scan the whole scene at once and allow for analysis. Moving on to (3) and (4), they both require some sort of frame to keep them in place. (3) would require the structured light sensor to be encapsulated in order to reduce the interference of ambient light. Due to the encapsulation, it would only be suitable for use on the conveyor belt or feeder. As mentioned previously in chapter 5.2.4, it requires high computation time and perhaps many measurements, so it is questionable how suitable it would be on the outgoing conveyor. (4), the 2D line scanner, has the opposite problem. It requires a constant speed of the object in order to get appropriate measurements. As such, it would not be an ideal solution for the crusher feeder.
Figure 6.5: Implementation areas of each technology. (1): Single camera, TOF camera
or Stereo camera. (2): LiDAR. (3): Structured light scanner. (4): 2D line scanner.
6.4 Concept generation
By combining different ideas from the idea generation, concepts began to take form. At this stage, most ideas were accepted without reflecting too much on the realization and actual implementation of the system. However, to ensure that the team moved forward efficiently, the concepts were kept at a somewhat realistic level. All the concepts can be found in appendix D. On the basis of the idea generation and the technology research, the different technologies were placed at the different stages of the process where they could be functional and potentially solve the given problem. At this stage the focus was on the general idea of each technology rather than on a detailed level, as details were to be considered at a later stage once the technologies had been evaluated.
6.4.1 Concept one
Concept one uses a stereo camera to solve all the given measurement functions. This means that the solution can be applied to all the required functions, such as measuring the material flow, size and position. Moreover, this solution does not need any further on-site calibration, which might be required for other solutions. Regarding the installation of the system, it requires a low work effort, since the placement of the system is more likely to be constrained by the surrounding environment, such as light conditions, than by actual placement requirements, such as being within a certain distance of the feeder and/or conveyor belt. This solution also uses an embedded system, which makes the overall solution even more versatile. However, this means that the whole system needs to withstand the harsh environment and live up to the required IP classification.
6.4.2 Concept two
Concept two is very similar to concept one. It also uses a stereo camera to solve all the functions and therefore has all the benefits of concept one when it comes to the measuring system. What differs is the way the information is processed. In this concept, a server-based solution is applied. This could be an on- or off-site server that does the computation. The server-based solution can be a good alternative if high computational power is required that the embedded system cannot provide for real-time computation.
6.4.3 Concept three
This concept utilizes both a camera and a LiDAR in order to gather information about the material. The camera gathers 2D information, which together with the LiDAR data can be combined to obtain the size as well as the position and wear. This solution requires a fixed installation that is defined beforehand. It also uses an embedded system, which makes the overall solution more versatile. However, this means that the whole system needs to withstand the harsh environment and live up to the required IP classification.
6.4.4 Concept four
Concept four consists of a single camera and a 2D line scanner. The 2D line scanner collects the 3D data used to calculate properties such as size and wear, while the camera is used to find the position as well as to measure the flow of the system. This requires a fixed installation as well as on-site calibration of the camera. The system utilizes an embedded system so that it can be more versatile when it comes to placement.
6.4.5 Concept five
Much like concept four, concept five uses both a single camera and a 2D line scanner. All the information gathering is done with the same methods and it requires the same installation procedure for the vision system. What differs is that it instead utilizes a server, combined with an app, to provide the operator with the information needed.
6.4.6 Concept six
Concept six is based on a TOF camera. This solution is much like concepts one and two when it comes to the installation requirements. It can be installed almost anywhere without the need for calibration. Also, as mentioned in chapter 5.2.5, it can be used outdoors as well as indoors without extra lighting, which makes it a very flexible solution. Since the TOF camera does not require as much computational power, an embedded system is used together with a dedicated monitor.
6.4.7 Concept seven
This concept has both a stereo camera and a 2D line scanner. As the 2D line scanner alone would not be enough to provide all the information, it is combined with a stereo camera. This would most likely ensure a high-quality point cloud, but also that stone properties such as color are retained for analytic purposes. Since this system would require high computational power, a server would be the ideal choice, together with visualization on the web that the operator can access anywhere.
6.4.8 Concept eight
The last concept, concept eight, is based on a structured light scanner. This system would be placed around a conveyor belt or similar to create a closed environment and achieve a good and stable measurement. However, this means that the installation will be much more complicated. Since the system demands higher computational power to achieve real-time performance, a server was chosen to process the information, which is then visualized in an app.
6.5 Concept screening and evaluation
Once all ideas had been exhausted and all concepts had been created, they needed to be evaluated. This was done in two steps: first by the elimination matrix, followed by the weighted Pugh matrix. In order to determine which concepts are viable, they first need to go through the elimination matrix. The result from the elimination matrix can be found in appendix E. If a concept fails one requirement, it is removed. As the table in appendix E shows, concepts one, two and six pass the elimination matrix and therefore move on to the next evaluation stage. Concepts three, four, five and seven are eliminated due to their price, and could therefore be discarded from further evaluation. Furthermore, concept eight failed due to the fact that the technology would not be able to handle the amount of data.
The concepts that passed, concepts one, two and six, needed to be further evaluated in order to decide which concept to develop further. As the remaining concepts fulfill the requirements, the next step is to compare and evaluate the desires. This is realized with a weighted Pugh matrix, which can be found in appendix F.
The result from the weighted Pugh matrix shows that concept one is the best when it comes to overall performance. However, since the total weight of all criteria is 143, the other concepts fall close behind concept one. This means that concept two might be more beneficial in some cases where there is a need for more data collection. However, this also requires higher computational power and a control room close by where the server can be placed. Even though concept six falls close behind with respect to the score, concept one outperforms it in more vital desires such as Measure individual rocks - Target dimension accuracy, making that concept less desirable.
With the results from the idea generation as well as the concept generation and evaluation, concept one was chosen for further testing and development. The concept does not only show the most potential, but also proves to be highly versatile when it comes to placement. As highlighted in chapter 3, even if the crushing process may look the same at different quarries, the setup and prerequisites may vary considerably.
7 Software research
The software research involves finding methods for solving the stereo vision task. This includes processing the information with a computer for real-time calculation, solving the stereo vision problem from a hardware perspective and an algorithm perspective, finding methods to obtain the flow of pixels in an image, and the segmentation of individual objects, in this case rocks. This chapter briefly presents methods for these tasks. The findings in this chapter are then used for the subsequent software development, where the researched methods are implemented in software to solve the functions presented in chapter 6.2.
7.1 Processing approach
The most popular processing architecture for general purpose computing is the use of one or several central processing units (CPUs). Usually, the CPU is the only processing hardware available in a device, and as such it has to handle not only the software for computing the task at hand, but also background processes such as the operating system (OS). Modern CPUs have multiple cores (processing units), so-called multi-core processors, which enable execution of multiple instructions in parallel, depending on the instructions. In general, one can say that, for highly parallel tasks, having more processing cores results in faster computation. See figure 7.1 for an intuitive explanation of multi-processing on a quad-core CPU.
Figure 7.1: A quad-core (4 core) CPU running 6 processes. The CPU simultaneously
executes 4 threads (T2 in orange, T5 in gray, T7 in yellow and T4 in green) from 4
processes at the moment and the remaining two processes are on stand-by to receive CPU
time.
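As a minimal sketch of this idea, the snippet below distributes a placeholder per-pixel operation across a pool of worker processes, one per core of a quad-core CPU; the tile data and the operation itself are hypothetical.

```python
from multiprocessing import Pool

def double_pixels(tile):
    # placeholder per-pixel operation applied to one tile of an image
    return [px * 2 for px in tile]

if __name__ == "__main__":
    # four hypothetical image tiles, one per worker on a quad-core CPU
    tiles = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
    with Pool(processes=4) as pool:
        results = pool.map(double_pixels, tiles)
    print(results)
```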
Graphics processing units (GPUs) have become an important topic for image related tasks, as they have the advantage of having hundreds of processor cores on their silicon die. GPUs used for general processing are known as general purpose graphics processing units (GPGPUs). On the chip, they have multiple processors, called streaming multi-processors, with a large array of cores. As there is a greater number of cores and they are optimized for graphics processing tasks, they become very powerful for image processing and computer vision.
As the intended hardware solution for the final product was chosen to be an embedded
system together with a stereo vision camera setup, an ideal solution would be to leverage
the inherently parallel task by using a multi-core system with both CPU and GPU cores.
The current market leader in this segment is Nvidia Corporation with the Jetson TX2
embedded system. It features two multi-core ARM processors and a graphics processor
on board. The development of such a system would involve traditional programming
methods but also Nvidia’s CUDA architecture. The CPU is still the main processing unit
which handles all system related tasks, the applications and communication with other
hardware. The GPU is a stand-alone processing unit which is fed with a data stream from
the CPU. The CPU (host) streams, or uploads, data to the GPU (device) for processing.
The device performs all calculations that were instructed, and then the host downloads
the processed data. When the data is streamed from the host to the device, it is split up into kernel grids, thread blocks and threads. Each kernel grid contains a number of thread blocks, and each thread block contains a number of threads. The threads in this case are similar to the threads of a normal CPU. The complete GPU unit contains a number of streaming multiprocessors, which in turn contain a number of cores. Each thread is executed by a core, each thread block is executed by a streaming multiprocessor, and each kernel grid is executed by the complete GPU unit.
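The host/device round trip described above can be sketched as follows, assuming an OpenCV build with CUDA support (as shipped for the Jetson TX2); the input image and the filter choice are placeholders.

```python
import cv2

# Host/device workflow: upload, process on the GPU, download.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

gpu_frame = cv2.cuda_GpuMat()
gpu_frame.upload(frame)                      # host streams data to the device

# the filter kernel is launched as a grid of thread blocks on the GPU
gauss = cv2.cuda.createGaussianFilter(cv2.CV_8UC1, cv2.CV_8UC1, (5, 5), 1.5)
gpu_result = gauss.apply(gpu_frame)

result = gpu_result.download()               # host downloads the processed data
```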
Fig. 7.2 shows a simple illustration of the increased amount of processing power for pixel operations that a modern GPU can offer. As an example, the Jetson TX2's graphics processing unit contains one streaming multi-processor chip with 256 cores. As each thread is executed by a core, this amounts to a total of 256 threads executing pixel operations at any given time. In contrast, the Jetson TX2 offers 6 CPU cores, each of which can run 1 thread simultaneously. In theory, the GPU would then be able to process an image around 42 times faster than the CPU. In reality, one can expect performance gains ranging from 3 to 10 times when comparing to a CPU. Furthermore, moving data from the CPU to the GPU can take significant time, so the data set has to be sufficiently large in order to justify using the GPU for calculations. In fig. 7.2, one can see that the required number of CPU operations, $n$, is much greater than the number of GPU operations, $m$. Hence, $n \gg m$.
Figure 7.2: Simplified illustration depicting the larger amount of pixels a GPU can
process at once, compared to a CPU.
7.2 Image processing techniques
When it comes to image processing techniques, there are many viable methods that serve different purposes. Depending on what information is gathered, it can be used in various ways. One of the more common techniques is edge detection which, as the name implies, is used to detect the edges of an object. The core of the technique is based on a gray-scale image, where a comparison between every pixel and its neighbors is performed. If the value changes drastically, it indicates a change, which in most cases, but not always, is an edge. There are problems with false edges, which can be created by many factors, such as lighting differences, noise, shadows and distance. To avoid this, a controlled environment is desired but not always easy to achieve. That is why there are multiple edge detection methods available, each with its own benefits and drawbacks. This means that no method is the given choice; it rather depends on the given
case. Some of the most commonly used methods are Sobel, Canny and Laplacian. These methods can be divided into two different types: gradient based and zero-cross based. The gradient based methods use a kernel matrix to determine how much the surrounding neighbors affect the given point. The zero-cross based methods use the first and second derivatives to find where the value is changing and thereby find the edge of an object. The result also depends on the original resolution and its quality. A higher resolution means more pixels within the image, which theoretically means that the edge should be more precise. However, with a higher resolution there is a risk of obtaining more noise that the algorithm will identify as edges. To improve the result of the edge detection, there are ways to manipulate the picture. One commonly used method is to smooth the image with a filter, such as a Gaussian filter. By applying the filter, the noise is reduced, thereby removing a large number of false edges and increasing the possibility of detecting edges that would otherwise be missed. Other ways to manipulate the image are also commonly used, such as changing the lighting and/or inverting the image. Applying different filters, such as a bilateral filter or different kinds of blur, is another possible way to enhance the edges. However, as with the edge detection methods, no filter or method is the given choice. It is rather a matter of testing to conclude which method is most appropriate for the given case.
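A minimal sketch of this combination, Gaussian smoothing followed by Canny edge detection, is shown below; the file name and the thresholds are placeholder assumptions.

```python
import cv2

# Smooth first to suppress noise, then run a gradient-based edge detector.
gray = cv2.imread("rocks.png", cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)   # fewer false edges from noise
edges = cv2.Canny(smoothed, 50, 150)           # gradient-based edge map
cv2.imwrite("edges.png", edges)
```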
Another possibility is to compare the colour of the image and how it alters between a pixel and its neighbors. This technique utilizes the colour information that would otherwise be discarded by methods such as Sobel and Canny, which require a gray-scale image. However, since it uses more information, the computational power needed increases, which either takes longer time or requires better hardware. Moreover, the colour does not differ much between rocks, which means that the technique is not as favorable here as in other industries and cases (C. Akinlar and Topal, 2017).
7.3 Stereo vision
Stereo vision involves using images from two or more cameras to calculate the depth by
meansoftriangulation. Giventwoimageswiththesameobjectineachimage, allpointsof
the object are present i both images but with a certain shift in position. This shift, which
is caused by the image sensor being separated by a certain distance, can be used to obtain
the real-world coordinates of all the matching pixels in both images. By convention, the
left image is used as the reference image. This means that for all objects where the same
pixel exist in both images, the distance between them can be calculated. This distance,
or shift, is called disparity, which is illustrated in figure 7.4. If the object moves further
away from the camera, the disparity increases. Thus, there are three major topics in this
segment, the camera model, the stereo correspondence problem and distance calculation.
7.3.1 Camera model
To simplify the triangulation, a simple pinhole camera model can be used, with two cameras, parallel image planes and a camera center axis offset called the baseline. This is shown in fig. 7.3.

The pinhole camera model can be described as a box with a small pinhole in one wall. The light from the scene passes through the pinhole and is projected onto the back wall of the box, as shown in fig. 7.3 below. The point $P$ is a point in the 3D world coordinate system, with coordinates $(x_1, x_2, x_3)$. Point $C$ is a point on the 2D image plane, with coordinates $(y_1, y_2)$. The distance $f$ is the focal length of the camera and is known. When looking in the negative $X_1$ and $X_2$ directions, one can identify two pairs of similar triangles. These triangles directly map point $P$ to point $C$. Looking from the negative $X_1$ direction, $y_1 = -f\,x_1/x_3$. A similar equation can be extracted when looking from the negative $X_2$ direction. Thus, point $P$ maps to point $C$ as

$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = -\frac{f}{x_3} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$
Figure 7.3: How the pinhole camera aperture projects an image
The pinhole camera model is a very simple representation of a complex physical device. It does not take into account the lens distortion, the removal of which is an important prerequisite for stereo correspondence. Thankfully, as lens distortion is constant given the same zoom and focal length, it can be removed by transforming the captured image, using a camera calibration obtained from a known checkerboard pattern.
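A calibration sketch along these lines is shown below, assuming a checkerboard with 9x6 inner corners and hypothetical file names.

```python
import cv2
import numpy as np

# Planar board coordinates for a 9x6 inner-corner checkerboard (assumed grid).
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

obj_points, img_points = [], []
for name in ["calib_01.png", "calib_02.png", "calib_03.png"]:
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (9, 6))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics and distortion, then undistort a captured frame.
_, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                       gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("frame.png"), K, dist)
```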
7.3.2 Stereo correspondence
The stereo correspondence problem is another key component of stereo vision. As the
triangulation to obtain distance is based on the pixel distance between objects present
in both left and right image, the quality of matching two pixels directly influences the
quality of the distance measurement. The distance is known as the disparity, shown in
fig. 7.4.
Figure 7.4: In the left image, point Pl is investigated. The algorithm finds the same point in the right image, Pr. In the disparity map, brighter values mean that the object is closer to the camera, darker values mean it is further away.
There are many algorithms to solve the stereo correspondence problem. The area itself has undergone a significant amount of research and development in recent years. While most algorithms are focused on obtaining the best possible matching, they require a lot of computation time. In order to achieve close to real-time performance, block matching algorithms have been developed. For a grid of pixels in the left image, a search is conducted in the right image to find a matching pair of features or points. This particular algorithm searches along the row of the chosen pixel for a matching pair. The search area is usually restricted to within a few pixels. The image regions are then compared using the sum of absolute differences (SAD) and a matching pair is found. This approach is very efficient computation-wise, but can produce wrong estimations of depth as well as noise. This is highly visible in images where there are not a lot of features on a surface. Featureless regions cause the algorithm to find the wrong match, as all of the pixels in the search range have equal or very similar values.
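The sketch below runs OpenCV's block matcher, which uses the SAD comparison described above; numDisparities and blockSize are tuning assumptions.

```python
import cv2

# Block matching along pixel rows of a rectified stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)   # fixed-point result, scaled by 16
```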
7.3.3 Distance calculation
When the depth map has been extracted from the stereo image pair, the actual depth can easily be calculated given the previous assumptions: the camera lens distortion has been removed by calibration and rectification, and the two cameras are planar and only offset by the baseline b, as shown in fig. 7.5.
Figure 7.5: Triangulation of a real-world point given an image point and camera pa-
rameters.
The depth can be derived from the similar triangles shown in the above figure, resulting in the depth being calculated as

$$D = \frac{f \cdot b}{d}.$$

Depth $D$ is the depth from the camera lens aperture to the point on the rock, in meters. Baseline $b$ is the camera offset distance in meters. Focal length $f$ is the lens property given by the manufacturer of the lens, in meters. The displacement $d$ is calculated as $d = |P_l| - |P_r|$, measured from each image plane's origin. As a pixel is a physical element of the camera's photo-sensor array, each captured pixel has a size equal to that of the sensor element. As such, the displacement $d$ is in meters. The baseline is the distance between the camera center axes. This measurement, given the same camera field of view, allows for objects to be detected either closer to the camera or further away from the camera. In fig. 7.6, the red area indicates the zone where objects can be fully seen by both cameras.
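A minimal sketch of the conversion from a disparity map to metric depth is given below; expressing the focal length in pixels lets the disparity stay in pixels, and all numeric values are assumptions.

```python
import numpy as np

# Metric depth from disparity via D = f*b/d.
f_px = 1400.0     # assumed focal length, in pixels
baseline = 0.12   # assumed baseline b, in meters

disparity = np.load("disparity.npy").astype(np.float32)  # hypothetical input
depth = np.where(disparity > 0, f_px * baseline / disparity, 0.0)  # meters
```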
7.4 Optical flow
Optical flow is the apparent movement of an object from one image to another. Much like the stereo correspondence problem shown earlier, the goal is to match pixels from two images. As shown in fig. 7.8, there are two components of the optical flow: the magnitude is the size of the motion vector, and the angle is the angle from the point in image one to the same point in image two. A popular algorithm for dense optical flow is the Farnebäck algorithm (G. Farnebäck, 2003). In the first frame, a region is selected. In the second frame, neighbouring regions are inspected to find a match. Then a motion vector between the origin and the matched region is calculated. The magnitude is the pixel distance.

Figure 7.8 shows a simplified illustration of dense optical flow. In a, the first and second frames can be viewed. There is a clear shift of the rock towards the upper right corner in the second frame. Running the algorithm, it would calculate the magnitude and angle of the points on the rock. Shown in b is the motion vector of the rock's edge. In c, a color representation of the movement is shown. In order to visualize the motion vector in an image, the magnitude and angle are mapped to a hue-saturation-value (HSV) color space. The magnitude of the vector becomes the saturation value and the hue shows the angle. The value color channel is fixed.
Figure 7.8: Illustration of optical flow.
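A sketch of dense Farnebäck flow with this HSV visualization is given below; the algorithm parameters are common defaults rather than values from this work.

```python
import cv2
import numpy as np

# Dense optical flow between two frames, visualized in HSV as described
# above: hue encodes the angle, saturation the magnitude, value is fixed.
prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

hsv = np.zeros((prev.shape[0], prev.shape[1], 3), np.uint8)
hsv[..., 0] = ang * 180 / np.pi / 2                              # angle -> hue
hsv[..., 1] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # magnitude -> saturation
hsv[..., 2] = 255                                                # fixed value channel
vis = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```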
7.5 Image segmentation
When it comes to image segmentation, there are many ways to approach the given task. Even though the approaches may vary, the idea remains the same: the main use of segmentation is to separate objects from each other for purposes such as visualization, measurement or analysis. For the given problem, two approaches remain highly relevant: 3D point cloud segmentation and watershed segmentation.
7.5.1 3D point cloud segmentation
From systems that reconstruct the 3D surface, much information can be gathered. Stereo cameras and LiDAR are two of the systems that have the ability to gather 3D information about the surface of an object; more information about these systems can be found in chapter 5.2, 3D technologies. The most commonly used technique for segmentation is based on clustering of 3D points belonging to an object. This requires a precise depth map. The segmentation can then be based on the distance between the points, meaning that if the distance between points is greater than a certain value, they are assumed to belong to two different objects. The problem with this technique is that it relies heavily on the quality of the point cloud.
The other way to use a 3D point cloud for segmentation is a combination of the previous method and taking the depth into consideration. By using the depth, there is a possibility to detect the edge of an object. The problem with this method is that the two cues can “argue against each other”; for example, if two stones are overlapping but there is little to no difference in height between them, the depth cue will indicate that the stones are the same object, while the clustering technique will indicate that they are different objects. In theory both of these techniques will work, but usually the biggest problems when using 3D point cloud segmentation are the accuracy of the point cloud and the noise produced by the system. The noise can be reduced, but this is highly dependent on the system and how much noise it produces.
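As a sketch of the distance-based clustering technique, the snippet below groups points whose mutual distance is below a threshold; the 5 cm value and the input file are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Euclidean clustering of a 3D point cloud: points closer than eps end up
# in the same cluster, i.e. are assumed to belong to the same object.
points = np.load("cloud.npy")   # hypothetical (N, 3) array of 3D points

labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(points)
# points sharing a label are treated as one object; label -1 is noise
num_objects = len(set(labels)) - (1 if -1 in labels else 0)
```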
7.5.2 Watershed segmentation
It is possible to do image segmentation in a 2D picture using a powerful tool, the watershed method. This method is based on an iterative process. The idea behind the method is that it utilizes the high and low peaks of a gray-scale image to determine whether an object is within the image. It enhances the value of every small peak to ensure that it will not blend in with the surroundings. This is often explained with the analogy of a water-filled valley, illustrated in fig. 7.9. As water enters an isolated valley, a border is created as soon as the water reaches the first peak, preventing the water from flooding over. This continues for every peak, and the borders grow until the highest peak in the valley is reached. The watershed method enhances every small peak to the same value as the maximum peak. The problem with watershed segmentation is similar to that of the 3D point cloud. If any noise exists, which it inevitably will, the watershed will enhance it as well and create objects that do not exist in the original image. This is often called over-segmentation. Another problem is under-segmentation of the result. This often occurs if the method has been limited too much in order to achieve a viable result. Similar to the edge detection methods, the result will be highly dependent on the surrounding factors that reduce the noise to a lower level. However, this approach has been highlighted in other industries that face similar problems where the objects are similar to each other, such as the medical industry, where it has been used to find and highlight cells with great results.
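A marker-based watershed sketch following the flooding analogy is shown below; the distance-transform threshold is an assumption that trades off over- and under-segmentation.

```python
import cv2
import numpy as np

# Seed the watershed with peaks of the distance transform, then flood.
img = cv2.imread("rocks.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

_, markers = cv2.connectedComponents(sure_fg)   # one seed label per peak
markers = markers + 1            # background becomes 1 so 0 can mean "unknown"
markers[unknown == 255] = 0      # region to be flooded by the algorithm
markers = cv2.watershed(img, markers)   # boundaries between objects become -1
```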
8 Software development
8.1 Development approach
As previously mentioned in chapter 7, the different methods are highly dependent on the surrounding environment as well as the setup in general, which means that a more traditional way of development, where systems are fully developed before being tested, would not be a suitable approach. Also, as previously mentioned, the different methods show great potential in different areas, meaning they could not be excluded up front. Approaching development in a traditional way would be extremely time consuming and not feasible within the time frame of this thesis. Therefore, a “Design-Test-Build” oriented approach was a better choice, seeing that the different methods have their own benefits and drawbacks that would not become completely clear until tested. Even though this approach is more intended for development of hardware solutions, it can also be used as a development approach for software. Once the initial design and tests showed progress, development continued in order to further optimize the methods.
Figure 8.1: Design-Test-Build approach
Taking the “Design-Test-Build” approach also enabled working concurrently with different methods, as well as quickly changing or discarding an approach once it was proven to be an invalid solution to a certain problem. Designing the solution in different steps that were not dependent on each other was necessary to ensure that different solutions could be developed concurrently. It also meant designing the solutions in small steps and then testing each step to ensure that it functioned before moving on to the next. This way, the team always had something to fall back on if something did not function as intended, without the need to restart from scratch. Even if the testing may take up extra time, it is essential in order to not waste valuable time on solutions that may not work once fully developed.
8.2 Measuring material flow
An important factor for the crushing process is the incoming material. Therefore, it is natural to measure and evaluate the flow of the incoming material to the crusher, both in the short term and in the long term.
8.2.1 Proposed method
The proposed method for measuring material flow is shown as a black-box model in fig. 8.2. It is an important part of the detection of jamming and provides valuable information for producing performance reports of the primary crushing process. The subroutine takes a single video camera stream (Video stream left camera), a predetermined flow direction of the feeder (Feeder flow direction) and a region of interest mask. The scene for the video stream can be just the feeder, the crushing chamber or both. In the case of investigating the material flow on the feeder, the physical direction of the flow would need to be input to the function from a user interface, since the camera can be placed in any arbitrary position. In order to reduce the computation time, unwanted areas are removed by the region of interest mask. There are two outputs from this subroutine: Flow regions and Flow speed. The flow regions indicate where there is flow in the image. For example, if both the crushing chamber and the feeder are present in the same video stream scene, the flow regions would be the area on the feeder and the area inside the crushing chamber. This information helps to extract relevant data for generating improvement reports. The flow speed provides the current magnitude of the flow vector. The output can be off, low speed, medium speed or high speed. As described in chapter 7.4, the magnitude of the flow is determined by how many pixels the object has moved between two frames. As such, the output is related to pixels and frames, not physical distances and time.
Figure 8.2: Black-box model of subroutine
Each video frame from the stream is subjected to several image processing techniques and algorithms in order to produce the desired output. The procedure to obtain the output from the subroutine inputs is shown by a flow diagram in fig. 8.3. First, the frame is masked off with the region of interest mask. As reducing the number of pixels being processed directly reduces the subroutine execution time, this is an important first step. The masked image is then altered by changing its brightness and contrast through histogram equalization. As the dense optical flow algorithm works by matching features in two images, the contrast change helps to bring out more distinct features of the image.
Figure 8.3: Subroutine flow chart
The optical flow algorithm requires two frames in order to compute the pixel distance change of a moving object. As such, on start-up, the first frame has to be skipped and stored so that the displacement can be calculated using the next frame. Subsequently, all frames are stored and used to calculate the optical flow from the upcoming frames. The optical flow itself is calculated as described in chapter 7.4. As only the flow in the feeding direction is of interest, the feeder flow direction is used to extract only the relevant vector angles. The remaining flow vectors are then categorized by both their location, i.e. the flow region, and the magnitude of the vector, and become the output of this subroutine.
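A condensed sketch of this subroutine is given below; the angular tolerance and the speed thresholds (in pixels per frame) are assumptions, not tuned values from this work.

```python
import cv2
import numpy as np

# Mask, equalize, compute dense flow, then keep only vectors aligned with
# the feeder direction and categorize the resulting speed.
def material_flow(prev_gray, gray, roi_mask, feeder_angle):
    prev_eq = cv2.equalizeHist(cv2.bitwise_and(prev_gray, prev_gray, mask=roi_mask))
    curr_eq = cv2.equalizeHist(cv2.bitwise_and(gray, gray, mask=roi_mask))

    flow = cv2.calcOpticalFlowFarneback(prev_eq, curr_eq, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    aligned = np.abs(ang - feeder_angle) < 0.4     # only the feeding direction
    speed = float(mag[aligned].mean()) if aligned.any() else 0.0

    # categorize in pixels/frame, since the output is not a physical speed
    for threshold, label in [(0.5, "off"), (2.0, "low"), (5.0, "medium")]:
        if speed < threshold:
            return aligned, label
    return aligned, "high"
```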
8.3 Measuring material size
The results from the software research showed that there are multiple ways to enable segmentation and measurement of objects. However, testing quickly showed that there were substantial problems with the generated point cloud. The problem is highlighted in fig. 8.4, picture a, where the image is taken from above: the points are distributed in layers. When testing, the distance between the layers was usually a fixed 5 cm. This became problematic when segmenting the rocks, since there are no distinct differences between the heights of the rocks. Hence, segmentation of the point cloud is not a viable option with the chosen hardware.
Figure 8.4: a: The generated point cloud for the overview of the feeder, b: Side view of
the point cloud
8.3.1 Proposed method
Measuring the material size is the core function needed in order to predict jamming of the crusher, as well as to provide important and valuable information for producing reports regarding the result of the blasting process and the size distribution. The subroutine is shown in the black-box diagram, fig. 8.5, where the required inputs are shown together with the output of the function, the rock size. As shown, the subroutine requires the video streams from both the right and left cameras, which are used to calculate the depth map. In addition, one video stream from the left camera and the region of interest mask are needed. The mask is applied to avoid spending computation on areas that are irrelevant. With these inputs it is possible to perform segmentation in order to extract the rocks as single objects, which can then be combined with the depth map to extract the rock size. A flow chart of the subroutine is shown in fig. 8.6.
Figure 8.5: Black-box model of subroutine
The output of the subroutine is the rock size for each and every segmented rock. The information is then used in other subroutines to determine what the next step will be: whether the rock can potentially cause the crusher to be jammed and/or whether the information should be used for informational purposes. This is further discussed in section 8.5, Jammed crusher detection, and in section 8.6, Visualization and generating report.
Figure 8.6: Subroutine flow chart
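As an illustration of the final size extraction step, the hypothetical helper below scales a segmented rock's pixel extent by depth over focal length, using the pinhole relation from chapter 7.3; the function name and the focal length are assumptions.

```python
# Hypothetical size estimate for one segmented rock.
def rock_size_m(bbox_w_px, bbox_h_px, depth_m, f_px=1400.0):
    # at depth D, one pixel spans roughly D / f_px meters on the scene surface
    return bbox_w_px * depth_m / f_px, bbox_h_px * depth_m / f_px
```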
8.4 Measuring wear parts
An important factor for the plant process planning is wear part replacement. As this is
part of routine maintenance, extending the time between wear part changes can increase
the utilization of the crusher itself.
8.4.1 Proposed method
As the crushed material size depends on the sizing gap parameter, shown in fig. 8.7, measuring this can help to increase wear part life, optimize the power draw of the crusher and ensure that the crushed rock is within specification. The subroutine to measure the wear parts is quite simple. Two camera streams, Video stream right camera and Video stream left camera, are used in conjunction with a region of interest mask to scan the crusher wear parts. The output of this scan is the wear part dimensions, which can be used in later stages to determine the closed side setting (CSS), the CSS change over time and the wear over time.
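A hypothetical sketch of how repeated scans could feed the wear planning is given below; the linear wear model and all names are assumptions.

```python
# Linear wear-rate estimate from measured gap dimensions over time.
def wear_rate_mm_per_day(css_history_mm, days):
    return (css_history_mm[-1] - css_history_mm[0]) / days

# Remaining wear-part life until the gap reaches an allowed limit.
def predicted_days_left(css_now_mm, css_limit_mm, rate_mm_per_day):
    return (css_limit_mm - css_now_mm) / rate_mm_per_day
```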
9 Commercialization
The most important aspect of a product is creating value for the customer. In this
chapter the product offering and return on investment for the end customer is covered,
which gives an idea of the differentiating points of the proposed system given the current
market situation.
9.1 Process improvement
The main selling point for the product is the way it can improve the crushing process
of the plant. As the proposed system would be used as an analysis tool of the primary
process, the product itself cannot improve the process. Rather, it would give the plant
the ability to gain a better overview of the process. It is then up to the customer to act
upon information they receive. As such, any improvement is mainly in the hands of the
customer, not the product. Thus, the selling points of the product would be what kind
of data it can provide and what kind of improvements this would have the potential for.
In fig. 9.1, the level of process improvement and the time it takes for the improvement to
yield results can be viewed. The graph gives a rough estimate of this relationship as well
as where the information is used.
Figure 9.1: The level of process improvement versus the time for improvements to yield results.
9.1.1 Operator level
The improvements made by an operator can typically be measured in hours. The operator can typically not improve the overall process by making their own decisions. As their main responsibility is to take care of the daily operation of a few machines, their impact would be seen as keeping the current process running more smoothly. They are also the people who have direct control over the machines. As such, information regarding alarms and jamming is a key point for this level. The level of improvements that can be made depends highly on their experience and knowledge of the process. The system would be able to leverage their current abilities by providing accurate information, and allow them to be more efficient by providing them with alarms and the severity of each alarm.
9.1.2 Blast crew level
The blasting crews work with drilling holes in the rock and then filling them with explosives. The number of holes that are drilled and the amount of explosives used to blast the rock directly impact the resources used at the plant. In order to optimize this process, the proposed system would give the blasting crew a performance report of their work and give suggestions for how to improve the blasting. As there is an optimal range for the size of blast rock, they can see from the report whether they need to use more or less drilling and blasting. If the performance report tells the crew that the material was too fine to crush, they would drill fewer holes and use less explosives in the future. That would result in the crusher being utilized more and also reduce the drill and blast costs. If the blasted rocks are too large for the crusher, the crew is using too little drilling and blasting, which leads to production stops at the primary crusher. As such, the performance reports would take the process improvement to another level. Although a slower improvement cycle, it would still be in the range of a few days to a few weeks for the improvement to take effect.
9.1.3 Plant management level
Plant management has more control and overview of the process than the aforementioned teams, but lacks detailed data about the process. On this level, larger improvements can be made to the process. The plant management would like an even deeper look into their process in order to improve it further. As such, the process flow is an important parameter. For example, the proposed system might show that the primary crusher runs empty 25% of the time. The plant management could then investigate the root cause of this further, and either improve by redesign or by allocating more resources to the current transportation.
Furthermore, any plant requires planned maintenance of machines. A regular occurrence is the changing of wear parts for the crushers. Currently this is conducted by the operators via occasional visual inspection. The scanning of wear parts would enable the plant management to see the wear part life and the predicted life left. As such, planned production stops could be made more efficiently and with better precision, so as to reduce the number of unnecessary stops. The improvements would have a greater impact on the overall process performance, and the time-span of the improvements would range from weeks to months.
9.1.4 Organization level
On the organization level, even higher-level decisions can be made. The metrics described above may not be of value at this level. However, the overall utilization of the crushers would be of value. If the organization operates multiple plants, information regarding the utilization of the different plants is of value. This information would allow the organization to allocate resources between plants and to improve the overall organization performance. As changes on an organizational level can be rather slow, the time-span for improvements would be measured in months to years. However, changes at this level would also lead to higher levels of improvement.
9.2 Return on investment
As with any product, the return on investment is an important factor for the customer. As described in the literature study, the plants have limited time on quarry permits and also tight budgets. In order to give an indication of the return on investment time, some parameter assumptions have been made, which may vary greatly depending on the quarry. Firstly, the assumed profit margin is 6%. Secondly, an average price of the end product is assumed to be 20 € per ton. Thirdly, the price of drilling and blasting is assumed to be 50 € per hole (B. Afum and V. Temeng, 2014). The final price of the product and installation is estimated to be 20 000 €.
Two cases are presented below, describing possible scenarios where the system has improved the plant by decreasing production stops and reducing the number of blast holes.
9.2.1 Case 1: Decreasing production stops
When the vision system is installed, the plant management notices that the crusher runs empty 25% of the time. Also, the operator is often busy with various problems around the site and can only respond to a jam within 20 minutes. When he finally reaches the crusher, he can clear the jam in 5 minutes. Thus, every jamming incident takes 25 minutes. The management sees that this has happened quite frequently as of late, an average of 3 times per day. The plant management makes the transportation of material more efficient and the operator prioritizes jamming of the jaw crusher. The changes reduce the time the crusher runs empty by 2% and the average production stop from jamming by 5 minutes.

Assume that the plant crushes 5000 tons each day, with a production day 14 hours long. On average this corresponds to a production of 6 tons per minute. Reducing the production stops by 20 minutes per day would then yield an increase in production of 120 tons per day. Additionally, when the crusher is fed 2% more each day, that would yield an increase of 133 tons per day. As such, the increased production yield is 253 tons per day.
With the previously mentioned parameters, the return on investment is calculated as follows:

$$\text{Return on investment} = \frac{\text{investment}}{\text{increased yield} \times \text{material sell price} \times \text{profit margin}} = \frac{20\,000}{253 \times 20 \times 0.06} \approx 70 \text{ operating days}$$
10 Ethical and environmental aspects
As with any product, there are ethical perspectives that need to be considered. As the crushing process begins to be observed and analyzed, one has to consider the consequences this will have, not only for production purposes but also regarding the integrity problems it might cause. With the proposed system, information about every problem will be gathered, and thereby also knowledge such as when it happened and how long the downtime of the crusher was. Even though this is the main purpose of the system, there is the potential of tracking who was in charge when the problem happened. This is of course essential when major problems happen, but a problem may arise if workers are being examined for every second that they are working.

As the operator has many things to attend to, he or she might not be able to clear a problem immediately. However, there is a chance that the workers will know that every second the crusher is not running will be placed upon them in the statistics, meaning that they will rush to or prioritize the jammed crusher even if other problems are more important. Therefore, it is of utmost importance to ensure that the attitude towards and the usage of the product focus on how to improve the process, rather than on pinning problems on the operator. The system also offers a possibility to reduce the chance of corruption. Even if this may not be an outspoken problem today, one has to consider how to constantly prevent it from arising. It can concern decisions regarding an investment or the evaluation of teams, such as the blasting team. With the system, the facts will be able to speak for themselves and it will be much harder to hide and cover up problems or shift them onto other areas of the process.
As described in Chapter 3: Literature study, the demand for more environmentally friendly and more efficient processes is constantly increasing. These demands come both from the industry itself and from the buyers, who are starting to require that the end product has a lower environmental impact. One way to achieve this is by ensuring that the crushers have a high degree of utilization. As the crusher is constantly running, the energy used when no material is present can be seen as wasted. As the proposed solution will present the degree of utilization of the crusher, the information provides the opportunity to identify which processes should be optimized in order to become more efficient. One such process could be the material transportation to the crusher, with solutions such as replacing dump trucks with conveyor belts to achieve a steadier stream of material, ensuring that the crusher always has material ready to process. Furthermore, the proposed solution also gives information regarding how efficient upstream processes, such as the blasting process, are. Optimizing the usage of explosives and the number of drilled holes will decrease the environmental impact. Using too much explosives and/or too many holes has the direct impact of using too many resources in the blasting process, as well as reducing the utilization of the primary crusher. Not using enough explosives and/or holes will increase the time the material needs in the primary crusher, and therefore increase the energy needed for crushing the material.
11 Prototype development
The project resulted in a prototype system that was installed on the NCC Tagene primary
crusher.
11.1 Differentiation from the proposed solution
As with most prototypes, the functionality is limited in comparison to the proposed system. Since development time was limited, both the hardware and the software level of functionality differ from the conceptual solutions.
11.1.1 Hardware
As proposed in chapter 6.5, Concept screening and evaluation, concept one was the concept that showed the most potential, consisting of a stereo camera that captures and provides the subroutines with the video streams. For the prototype, two single cameras were used instead, which meant that the prototype needed extra work to function as a stereo camera, such as calibration, programming and assembly. However, choosing two single cameras gave the possibility to approach the problem from different angles. Furthermore, both time and money played a significant part in the choice as well. A stereo camera that would perform similarly to the single cameras was not available within reasonable time once the concept generation was complete and the decision on which concept to continue with was made. Also, the price for the given stereo camera was much higher compared with the single test cameras that were used. However, even though some of the problems were expected, such as the synchronization between the cameras, the magnitude of the problem became greater than anticipated. The research before the decision was made indicated that the synchronization was possible and should not be a big concern. However, once the code for the synchronization was tested and optimized, it became clear that the synchronization would become a bigger issue due to the time delay between the pictures. This became even clearer once the prototype was in place, due to other factors that had an impact on the capturing process. One of the factors that was taken into consideration, but became a bigger problem than expected, was the vibration that the crusher generated. This was due to the exact placement of the installation, which was decided at a later stage and will be elaborated on in chapter 11.2.1, Setup. The outcome was that the vibration increased the offset that the synchronization created, meaning that the material would not only be offset in the direction of the flow but also, due to the vibrations, in other directions. Furthermore, since the cameras are not IP classified, it was necessary to build an enclosure to withstand the harsh environment during the testing. Images of the vision prototype can be seen in fig. 11.1 and fig. 11.2.
Figure 11.1: Overview of the chosen vision system, both the outside when closed and the internal cameras.
As for the computational hardware, a Jetson TX2 was used. This was also the solution from the most promising concept and is the proposed solution due to its performance compared to the price. This solution is also, as previously mentioned, more versatile compared to other solutions when it comes to placement, making it possible to place the system almost anywhere. However, during the development and testing phase a laptop was used. The main reason for this was convenience while developing and verifying the tests. The laptop used the same operating system as well as the same methods, the main difference being the increased computational power. This sped up the overall process during development; however, occasional tests were performed on the Jetson to verify its performance. Images of the computational hardware prototype can be seen in fig. 11.2.
Figure 11.2: Overview of the chosen computational hardware, both the outside when closed and the internal components.
11.1.2 Software
The software differs in functionality from the proposed solutions in chapter 8. The core
functionality of measuring the material size and flow was developed fully and could be
implemented during the on-site tests. The subroutines can operate independently and
they are able to extract the wanted information. As significant time was spent reaching
this level of functionality in the core functions, the higher-level functions of visualization
and jamming detection were only taken to the concept stage. However, as the output of
the core functions is used to sense the environment, most of the hard work has been completed.
Developing the jamming detection into a complete subroutine would not require much
additional effort, as the logic itself is quite simple.
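As a concept-stage illustration, the jamming logic could look like the hedged sketch below: an alarm is raised when the material flow stays near zero while the crusher is running for longer than a threshold. The helper inputs and both threshold values are hypothetical placeholders, not values from the thesis.

JAM_FLOW_THRESHOLD = 0.05   # m/s, assumed: below this counts as "no flow"
JAM_TIME_THRESHOLD = 120.0  # s, assumed: allowed no-flow duration before alarm

def detect_jam(samples, now):
    """samples: iterable of (timestamp, flow_speed, crusher_running) tuples,
    ordered in time; returns True if a jam alarm should be raised at `now`."""
    stalled_since = None
    for t, flow_speed, crusher_running in samples:
        if crusher_running and flow_speed < JAM_FLOW_THRESHOLD:
            if stalled_since is None:
                stalled_since = t      # start of the current no-flow period
        else:
            stalled_since = None       # flow resumed or crusher stopped
    return stalled_since is not None and now - stalled_since > JAM_TIME_THRESHOLD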
As the prototype was tested in the field, further refinement work was conducted during the
prototyping stage in order to improve the performance of the software. The core functions
went through significant overhauls, as the tests provided valuable feedback regarding the
software performance.
For example, as there were significant shaking problems with the camera and the cameras
were not frame synchronized, the initial proposal of 3D segmentation could not be used.
This meant a total revamp of the segmentation, which resulted in the use of the proposed
method shown in chapter 7.5.2.
11.2 On-site tests
As there was limited opportunity to validate the actual measurements of the system during
full production of the plant, this portion of the system could not be fully tested. However,
tests in a controlled environment on stationary rocks were conducted and provided an idea
of the true performance of the stereo vision scanning. The functions that could be
tested were the segmentation of the rocks and the material flow. These functions
were validated by manual inspection of the video stream, after which their performance
could be evaluated. The only suitable placement of the prototype gives
the view shown in fig. 11.6. The placement provided a view of the feeder and the crushing
chamber. As the system is a prototype, it should not interfere with production in any way. As such, the
view down into the crushing chamber is occluded. The physical installation can be seen
in fig. 11.4.
Figure 11.6: One frame from a test sequence.
The detection results could be grouped into five categories, where each result is judged
against the actual boundary of the rock. The category Correctly identified rocks
consists of rocks where the bounding box fits around the entire rock within a margin of
±5% of the actual boundary. The Undersized category covers bounding boxes that were
between 5% and 15% smaller, while the Oversized category means that the
bounding box is between 5% and 15% larger. The Cluster category contains rocks that
were, due to segmentation problems, considered to be one object. Lastly, if a bounding
box considered parts of the feeder to be a rock, or if rocks were not segmented at
all, they were included in the Unidentified rocks category. Examples of the
categorization are shown in fig. 11.7.
Figure 11.7: a: Correctly identified rocks, b: Undersized, c: Oversized, d: Cluster
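A hedged sketch of how the five-way categorization and the resulting detection rate can be computed is shown below. The thesis judges each bounding box against a manually inspected rock boundary; reducing that comparison to an area ratio, as done here, is an assumption about how the ±5% and 15% margins were applied, and clusters are assumed to be flagged manually during inspection.

from collections import Counter

def categorize(box_area, true_area):
    ratio = box_area / true_area
    if 0.95 <= ratio <= 1.05:
        return "correct"         # within ±5 % of the actual boundary
    if 0.85 <= ratio < 0.95:
        return "undersized"      # 5-15 % smaller
    if 1.05 < ratio <= 1.15:
        return "oversized"       # 5-15 % larger
    return "unidentified"        # clusters are flagged separately by inspection

def detection_rate(categories):
    counts = Counter(categories)
    useful = counts["correct"] + counts["undersized"] + counts["oversized"]
    return useful / sum(counts.values())   # e.g. 20 % + 10 % + 15 % = 45 %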
The results from five test sequences captured on site during full production were analyzed
and can be seen in fig. 11.5. On average, the prototype is able to correctly identify 20%
of the rocks in the scene. Undersized and oversized bounding boxes account for 10% and
15% of the rocks, respectively; these rocks are still considered useful for the rock-size
measurements. The results also contained clusters of rocks that were segmented together,
totaling 27% of the rocks in the scene. Lastly, the unidentified rocks made up 28%. These
were either not detected at all by the algorithm, detected as internal edges of a larger
rock, or the result of the algorithm falsely detecting parts of the feeder as rocks. As such,
the unidentified and clustered rocks were concluded to be of negative value for the material
analysis, while the remainder were considered valid detections. The resulting average
detection rate is thus 20% + 10% + 15% = 45%.
11.3.2 Rock measurement
Using the point cloud for segmentation showed promising results in the initial testing
in a controlled environment. In fig. 11.8, picture A shows the segmentation of two
rocks on an even plane, in this case the office floor. As described in chapter
7.5.1 3D point cloud segmentation, the segmentation is based on isolating objects once
the points in the cloud are a certain distance from other points. The rocks could be isolated
and the size of each rock could easily be extracted, since every point has known coordinates.
Measuring the stones showed that the dimensions differed by ±2 centimetres between
the real dimensions and those estimated with the point cloud. Several other measuring
tests, such as distance to known objects, gave similar results. This validates the
performance and capability of the system to be used for measurement in the given task.
Figure 11.8: a: The generated point cloud, b: The point cloud once the rocks had been
segmented from the cloud, c: Top view of the point cloud
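A minimal sketch of the distance-based isolation described in chapter 7.5.1 is shown below, here using DBSCAN as a stand-in clustering step: points closer than eps are grouped into one object, and the extents of each cluster give the rock dimensions. The eps and min_samples values are assumptions, not parameters from the prototype.

import numpy as np
from sklearn.cluster import DBSCAN

def segment_rocks(points, eps=0.03, min_samples=30):
    """points: (N, 3) array of x, y, z coordinates in metres."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    rocks = []
    for label in set(labels) - {-1}:        # label -1 marks noise points
        cluster = points[labels == label]
        dims = cluster.max(axis=0) - cluster.min(axis=0)
        rocks.append(dims)                  # axis-aligned extent per rock
    return rocks

Since every point carries metric coordinates from the stereo reconstruction, the returned extents are directly comparable to the hand-measured rock dimensions.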
11.3.3 Material flow
The algorithm is able to distinguish between when there is flow and when there is no
flow with an accuracy of 100%. Different flow speeds were more difficult to verify, as the
feeder was never run at a variable speed during the tests; the speed of the material on the
feeder was therefore almost constant. What could be seen, however, is that the speed of the
flow changed when a rock was falling or when the dump truck unloaded material into the
feeder. Figure 11.9 shows an example of the material flow on the crusher feeder.
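A hedged sketch of such a flow/no-flow decision is given below: dense optical flow between consecutive grayscale frames is thresholded on its mean magnitude. Farneback flow is used here as a stand-in, since the thesis does not specify the exact flow method, and the threshold is an assumed value.

import cv2
import numpy as np

FLOW_THRESHOLD = 0.5  # mean pixels/frame, assumed

def has_flow(prev_gray, curr_gray):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel motion in pixels
    return float(magnitude.mean()) > FLOW_THRESHOLD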
12 Discussion
In the early stages of the project, little was known regarding what an actual
production stop looks like and what causes it. As was observed, a production
stop was mostly the result of the crusher being jammed for a prolonged period of time. The
operators themselves would probably not regard some of the identified stops as jamming.
However, as we were constantly monitoring the crusher during the visits, a
significant amount of production time was wasted when a large rock either got lodged on
top of the crushing chamber, blocking the feeder, or was difficult to
break. The resulting production delay could easily be up to 45 minutes. The stop would
persist either because the operator was not actively monitoring the crusher or because they knew from
experience that the rock would be crushed within a few minutes.
As described previously in the thesis, there are several contributing factors for produc-
tion stops in the primary crushing process. As identified by us and as explained by the
operators, the blasting plays a vital role at this crushing stage. We were able to witness
both when the blasting had been too fine, resulting in rocks just passing through the
crusher, and when the blasting was too coarse, such that the crusher got jammed fre-
quently. Thus, feedback for the blasting crews seems to be a very good feature for this
site. However, as the proposed system does not directly control any part of the process,
the actual improvements have to be made by people on site.
At this particular site, the operators were quite skilled and were able to make good
decisions based on the information provided by the plant control system. For them, more
information would most likely increase their efficiency and allow them to make
better decisions. As such, a system that detects a jamming and can report the fault to
the operator would have the potential to significantly reduce the downtime.
When it comes to the blasting crew and the dump truck drivers, they were hired by
an external company. The operators would complain about their efficiency and lack of
knowledge, especially for the transportation team. For these teams, perhaps the reports
would not be as valuable. However, as the plant management would be able to directly
see where in the process there is a fault, they could decide to either hire another
crew or train the existing crews to perform better. As described before, these reports would
feed into higher-level decisions about the plant and then trickle down to
the fault source.
As with any new product under development, there will always be unforeseen problems
and factors that have an impact on the performance of the product. The main
question is how big or small these factors are and how much of an impact they have.
During the thesis, a number of factors were discovered that had an impact on the detection
system performance, one of them being the effect of the light conditions. Even though the
feeder and crusher are placed inside a building, in a somewhat controlled environment,
the light had a considerable effect on the detection system. The amount of light coming
from the outside was heavily dependent on both the weather and the time elapsed after
material was dumped on the feeder. Right after the material was dumped, it usually
blocked most incoming light, and as the material was processed, the light increased. This
meant that there was no optimal setting for the vision system due to the variation of the
light source. It also created more shadows than expected, which the system occasionally
picked up as objects.
Other factors that affected the system, but that were expected, were the dust and
vibrations caused by the crusher and feeder. During the testing phase, it also became
very clear that the mounting of the system plays a vital role in enabling the gathering
of valid information. As mentioned, even the slightest vibration affects
the vision system, since the images from the cameras need to match each other. The
idea behind the system, when it comes to mounting, is that it should be versatile
and offer the possibility to be mounted anywhere. However, due to the final placement of
the camera, the impact of vibration was highly underestimated. The magnitude of the
impact was also due to the combination of the vibration and the
unsynchronized cameras. When one of the cameras took a picture, more often than not,
the picture from the other camera was offset due to vibrations. The vibration
did not only cause problems with synchronization; the pictures also became blurry due
to the sudden movement. So even though the problems were expected, the degree of
problem they created was unanticipated. Therefore, it is of utmost importance to continue to
search for ways to limit or eliminate the vibrations, such as vibration-dampening mounting
materials.
Furthermore, both before and during the test period there had been little to no rain, which
meant that there was nothing to bind the dust, making dust an extreme problem
at the quarry, not only for the testing but also for the quarry and its personnel. At the
quarry, certain measures had been taken to minimize this, such as watering
the roads, the blasted rocks etc. The water helped reduce the dust levels
but did not remove them, partly because the extreme weather made the water
evaporate quickly. As a result, there was an excessive amount of dust at the feeder and
crusher during the testing phase. The dust levels were mainly a problem when rocks were
being dumped, causing the system to completely lose vision until the dust level reduced,
usually taking between 5 and 20 seconds. However, apart from temporarily blocking the
view, the dust did not cause any other issues during the testing phase, such as sticking
onto the surface of the lens enclosure. There is reason to believe that changes in weather
and humidity may cause the dust to stick to the surface protecting the
lens, but this is only a hypothesis and needs further testing to evaluate how much
of a problem it may cause.
As was shown in the thesis, there are many viable technologies that can be used to detect
and measure material. Ultimately, a stereo vision system was chosen as the most
suitable one based on price and performance. For the prototype, it also gave
the ability to test single-camera solutions as well as integrate these with stereo vision.
While the cameras were on the cheaper side, lacking synchronization functionality,
they were still a good foundation for evaluating the performance of stereo vision.
However, in a real scenario, fully synchronized cameras are a must. Another important
aspect of the stereo vision system is the algorithms used to solve the stereo correspondence
problem. Most of the algorithms that were implemented and tested required long processing
times. The problem was even more obvious when using higher-resolution images captured
by the cameras. It could be addressed both by having more capable hardware to compute
the images and by improving the matching algorithm. The results from the prototype
stereo vision system seem very promising, as an accuracy of around 2 centimetres in x, y
and z could be achieved with the unrefined set-up.
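As an illustration of the correspondence step, the hedged sketch below uses OpenCV's semi-global block matcher on a rectified pair and converts disparity to depth via Z = f·B/d. The matcher parameters, focal length and baseline are assumptions; the thesis does not state the exact configuration used.

import cv2
import numpy as np

matcher = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128,    # numDisparities must be divisible by 16
    blockSize=5, uniquenessRatio=10,
    speckleWindowSize=100, speckleRange=2)

def depth_map(rect_left, rect_right, focal_px, baseline_m):
    """rect_left/rect_right: rectified grayscale images of the same size."""
    disparity = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan       # mask invalid matches
    return focal_px * baseline_m / disparity  # metric depth per pixel

Moving such per-pixel work onto the GPU, or lowering the processed resolution, are the usual levers for the processing-time problem mentioned above.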
The segmentation of the rocks is another important cornerstone of the product itself.
During the project, traditional methods for edge detection and segmentation were used.
The initial testing of these methods gave quite promising results. However, after trying
most of the classic edge detection methods along with extensive use of other filtering
techniques, histogram equalization and morphological operations, the results are still not
satisfactory for use in a real product environment. For difficult rocks, i.e. rocks that
were in shadow or whose edges were slightly blurred, the edge detectors failed
quickly. This resulted in many rocks being segmented together, which directly creates a
false reading.

Furthermore, as edge detectors are based on a 2D image, many internal edges were
identified. The edge detection and segmentation would then give false boundaries
inside the rock, making it look like there are multiple rocks at that location. The current
system would, on average, be able to correctly identify 20% of the rocks, and around 45% of
the rocks would be considered good enough to use as correct measurements. Current
systems from competitors and solutions from other industries achieve around 70-90%
detection rate, at what the team would consider a correct measurement. The main
advantage of their systems is the in-house developed edge detection, which more often
than not is based on some sort of machine learning algorithm such as deep neural networks.
These algorithms seem to detect the rocks more intelligently by being trained on manually
segmented images. As time was limited during the project, this approach was not
considered feasible within the time-span.
13 Future work
The future development work is presented in this chapter. It covers the continued work
on hardware, software and user experience.
13.1 Hardware
• Mounting: The concepts focused more on versatility of placement than on the
physical mounting hardware. This needs further research, since observations have
only been conducted at one quarry, which is not sufficient to make a well-founded
decision.
• Vibration dampening: The results showed that vibration has a major impact
on the vision system, meaning that vibrations need to be minimized. Therefore,
further testing and research need to be conducted to limit the vibrations, for example
with vibration-dampening materials.
• Light conditions: Further testing needs to be done regarding how the light con-
ditions affect the vision system and its ability to gather information. This should
also involve performance tests with extra lighting to reduce unwanted shadows and
bring out edges.
• Mixed weather effects: As the prototype testing was only conducted over a
short period of time, there was no possibility to see how the weather affects the
system. Therefore, the system needs to be tested over a longer period of time,
including seasonal changes, to see how the weather impacts the lens protection.
• Embedded system: As the embedded system today consists of a Jetson TX2, a
development kit, there is reason to research whether other embedded systems
are more suitable for the industrial application, mainly ones with higher
computational power given the current calculation times.
• User experience: Since the end product will change with respect to the hardware,
the user experience needs to be taken into further consideration, mainly whether the
new hardware requires any maintenance due to the combination of dust and mixed
weather effects.
• Potential dust problems: Investigate whether watering the rocks during the
dumping process will reduce dust. Research whether there are any lens filter options
that could enable the camera to see through dust.
13.2 Software
• Rock segmentation: The edge detection and object segmentation were highly
dependent on image quality, lighting and how the rocks are stacked together; further
research is needed towards a more intelligent algorithm. A suggestion would be to
look further into training an artificial neural network (ANN) to improve the edge
detection.
• Data visualization: As mentioned, the Roctim cloud solution would be an im-
portant platform for visualization. Further work is needed to integrate the data
stream output with the cloud.
• Operator alarms: No work has been put into visualizing and presenting data
for the operator. Thus, a trial application for smart-phones should be developed,
which shows alarm messages and can provide the operator with a video stream of
the event.
• Improve stereo vision algorithm: The current stereo vision is still a bit noisy
and can sometimes have trouble with certain surfaces. Reducing the noise would
provide more stable measurements. Computation speed could also be improved by
optimizing the code and implementing more functions in the GPU pipeline.
13.3 Other usage areas
A camera system is highly flexible and, when it comes to use in other areas, more
dependent on the software than on the hardware. This means that with further
development on the software side, the system could be implemented in other crushing
stages, be used to measure stockpiles, or detect vehicles and people in critical areas within
the quarry. Since safety is a hot topic within the industry, the system could be used to
increase the safety of the quarry by identifying and alerting the surroundings if vehicles
and/or people are in certain areas. As this was not within the scope of the project, no time
was spent developing such a system, but the method is already available, as it is used in
other industries such as the automotive industry.
ABSTRACT
The focus of this study is the extraction of heavy metals from wastewater using emulsion liquid
membranes (ELM) in a way that contributes to green chemistry. A more robust ELM system may be
used to reduce the toxic content in industrial effluents and to recover valuable metals. An ELM process
consists of an external phase (feed phase, containing the metal to be extracted), an organic membrane
phase and an internal phase (stripping or receiving phase). The internal phase and the membrane
together compose a w/o emulsion, created through emulsification using a homogenizer, and consisting of an
organic diluent, a mobile carrier, surfactants, possible co-surfactants or stabilizers, and a dispersed
aqueous phase containing a stripping agent that reacts with the extracted species. The w/o emulsion is
dispersed into the external phase, creating a multiple w/o/w emulsion in which the extraction process
occurs.
In this project we propose a novel ELM formulation consisting of the renewable material palm oil as the
vegetable diluent. The mobile carrier TOMAC is included in the membrane to facilitate the metal
transport and our system also incorporates the hydrophilic surfactant Tween 80 that facilitates the
dispersion of the ELM phase in the external phase. Span 80 is used as surfactant and butanol as co-
surfactant. The system achieved a removal efficiency of hexavalent chromium of over 99% when having
an optimal concentration of 0.1 M NaOH as stripping agent and an external pH of 0.5. Important
factors influencing the extraction were found to be the emulsion formulation, the agitation speed, and
the maintenance of a pH gradient between the phases. The stability of the ELM is crucial and needs
therefore further investigations. We also discovered that the type of water (deionized, distilled and tap
water) does not have a significant influence on the extraction rate.
The possibility of extracting pentavalent arsenic with an emulsion ionic liquid membrane (EILM) system
was also explored, when using kerosene as diluent, but without success. However, simple liquid-liquid
experiments with TOMAC as carrier verified the compatibility between arsenic and TOMAC, with the
optimal extraction efficiency at pH 9-10. Therefore, a successful formulation may depend on the
components in the system, such as the surfactant and the stripping agent used.
Keywords: Emulsion liquid membrane, palm oil, hexavalent chromium, pentavalent arsenic,
TOMAC, ionic liquid, green chemistry.
ACKNOWLEDGEMENTS
We would like to acknowledge the Linnaeus-Palme International exchange programme1 for providing us
with financial support and giving us the opportunity to travel to Malaysia and conduct our experiments
at the University of Malaya in Kuala Lumpur. This has been a wonderful experience, which would not
have been possible without the encouragement from Prof. Claes Niklasson.
We would like to give our deepest gratitude to our supervisors at the University of Malaya,
Prof. Dr. Mohd. Ali Hashim and Dr. N. S. Jayakumar, who welcomed us openheartedly and provided
us with support and help throughout our work and also regarding practical issues during our stay in
Malaysia.
A note of thanks goes to the PhD students Soumyadeep Mukhopadhyay and Yeesern Ng, who
supported us in our laboratory work and helped us find solutions to our problems.
We would like to thank our supervisor at Chalmers University of Technology, Ass. Prof. Anna
Martinelli who supported us in our writing process and provided us with irreplaceable feedback. Our
appreciation also goes to Prof. Krister Holmberg, who kindly embraced the role as our examiner. A
special thanks goes to Jan Rodmar, who helped process our results and provided us with a MATLAB
programme for this purpose.
Finally, we would like to express our regards to our families and friends, who have been a
constant support and security throughout our years at Chalmers. Sanna would like to give a special
thanks to Mattias Wänerstam, who has stood by her side the last five years, providing her with energy
and inspiration.
1 Linnaeus-Palme International exchange programme for education and training and financed by Sida (Swedish International
Development Co-operation Agency)
1 INTRODUCTION
The removal and recovery of heavy metals from wastewater and industrial effluents is environmentally
and economically driven as much as it is a health issue. Efficient, economic and sustainable methods for
this purpose are required, and this project focuses on process intensification and investigation of the
extraction of hexavalent chromium and pentavalent arsenic from water. Both chromium and arsenic
constitute a problem for the environment and a threat to human health, and in Malaysia and Southeast
Asia the contamination of groundwater and water resources is a major concern.
The extraction capability of liquid membranes has been used successfully in many areas, e.g. metal ion
extraction, separation of inorganic species, and biochemical and biomedical applications [1], and the field
is currently undergoing an expansion in research and in its application as an industrial separation
process. Emulsion liquid membrane (ELM) is a developed form of solvent extraction, with the
difference that extraction and stripping occur simultaneously in the same stage. At the University of
Malaya, among other institutes, this method is currently being optimized and substantially improved.
One improvement of the ELM system has been the use of ionic liquid as stabilizer of the membrane,
resulting in an emulsion ionic liquid membrane (EILM). The use of ELM for extraction of heavy metals
is a method implemented only to some extent in industries and further investigations of this separation
method are needed before industrial applications are possible on a larger scale. This includes stabilization
studies of the emulsion membrane, improvements of the de-emulsification step and identification as well
as intensification of various parameters influencing the efficiency. It also includes the development of a
robust system that is affected as little as possible by the presence of other ions or impurities in the
wastewater. The possibility to improve the sustainability of the ELM should also be explored, in order
to minimize the use of non-renewable materials.
This work is divided into two subprojects, the first focusing on the development of a novel emulsion
liquid membrane formulation based on a vegetable oil and the second focusing on the use of the already
developed EILM formulation for extracting pentavalent arsenic. In addition, the effect of the purity of
water is explored, by comparing the extraction rate when using water of different pre-treatments.
1.1 Subproject 1: using palm oil as diluent
From previous studies it is known that the carrier tri-n-octylmethylammonium chloride (TOMAC) is
selective for extracting hexavalent chromium and an EILM formulation has been developed and
optimized for this purpose [2]. In order to investigate the possibility of replacing the fossil-fuel based
synthetic diluent kerosene with a renewable material, a system similar to that previously developed was
chosen, keeping the metal to be extracted unchanged. Palm oil was chosen as the alternative and
renewable organic diluent, as it is readily available and may contain natural surface-active agents, which
improve the stability of an emulsion [3]. In addition, palm oil has been found to work well for extraction
of phenol using supported liquid membranes (SLM) [4]. Firstly, emulsion stabilization studies were
performed, as the ELM system demands a w/o emulsion that is stable for the time required for extraction
to occur, and no optimized formulations were found in the literature. Suitable surfactants and co-surfactants
were explored for an optimal emulsion formulation. Secondly, extraction experiments were performed
and various parameters affecting the removal efficiency were studied.
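Although the formula is not spelled out at this point, removal efficiency in ELM studies is conventionally defined from the external-phase metal concentration; assuming that convention here:

E (%) = (C0 − Ct)/C0 × 100

where C0 is the initial metal concentration in the external phase and Ct the concentration at time t.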
1.1.1 Purpose
The aim of subproject 1 is to explore the possibility of replacing the fossil-fuel based diluent kerosene in
the ELM with a renewable vegetable oil. If the system works well using the ELM based on the vegetable
oil, the subsequent aim is to optimize the removal efficiency of chromium from water using the novel
formulation. The parameters studied for removal efficiency are stabilization of the emulsion, surfactant
and co-surfactant concentration, agitation speed, carrier concentration and stripping agent
concentration.
1.1.2 Issues
Specifically, the following issues are investigated:
Can the petro-chemically based diluent kerosene used in previous studies be exchanged for a
vegetable oil?
How can stability of the emulsion liquid membrane be achieved for a sufficient time, by using
the materials at hand?
Does the more viscous palm oil decrease the extraction rate?
How is the extraction efficiency affected by the purity of the water?
Which are the important factors influencing the efficiency of chromium extraction?
1.2 Subproject 2: arsenic extraction
In previous studies at the University of Malaya, an EILM formulation was developed using the ionic liquid
[BMIM]+[NTf2]- as stabilizer in the membrane, used for extraction of chromium with the help of the
carrier TOMAC [2]. This EILM process has been optimized, and the results for the formulation are used
in our study to investigate the suitability of extracting pentavalent arsenic using EILM. In previous
studies arsenic has been successfully extracted using hollow fibre supported liquid membrane (HFSLM)
with the mentioned carrier [5], but this technique requires a long extraction time (up to 24 h),
compared to EILM (less than 15 min). In this subproject the compatibility between arsenic and TOMAC
was first addressed through simple liquid-liquid extraction experiments in which suitable pH ranges of
the external phase were also identified. Extraction experiments were performed using the optimized
EILM formulation.
1.2.1 Purpose
The aim of subproject 2 is to examine the possibility of extracting pentavalent arsenic from water using
an EILM system similar to that used for extracting hexavalent chromium, with kerosene as diluent.
1.2.2 Issues
The following specific issues are considered:
Can the EILM formulation used in previous studies be applied for extraction of pentavalent
arsenic? What ranges of pH are needed?
Are there any improvements of the system needed?
Which are the important factors influencing the efficiency of arsenic extraction?
1.3 Limitations
The whole research part took place at the University of Malaya, Kuala Lumpur. The time for the
experimental part was limited, wherefore investigations regarding de-emulsification and recovery of the
metals were not performed. The materials and apparatus used were limited to what was available
within the time frame and to what could be ordered and received at the start of the
experiments.
2 BACKGROUND
2.1 Environmental and sustainability aspects
Environmental aspects are often connected to the concept of sustainable development, which today is
a common goal and sometimes a demand in the industrial sector; a wish for a sustainable society is
present in many countries. Green chemistry² is also an important concept and is what we strive for in
this project. Sustainable development is described in the UN document “Our Common Future” from
1987³ [6] and implies an interaction of ecological, economic and social aspects closely linked together,
since environmental issues are also issues of society [7].
problems, and human activities should also solve them, which in industry and research means that it is
beneficial to prioritize recycling, reuse and the use of environmentally friendly products that are
biodegradable and produced from renewable raw material in a way that does not harm the environment.
It also means that the environment should be kept free from toxic elements that can harm human health
and destroy the ecosystem, and it is therefore important to minimize harmful emissions to the air, the
soil and the waters.
This project focuses on the optimization of heavy metal extraction in order to reduce the toxic content
in wastewater effluents and to reduce the overall environmental impact through using more sustainable
components. The method used is emulsion liquid membranes (ELM), described in detail in Section 3.
One benefit of using ELM from an environmental point of view is the low energy demand compared to
pressure-driven membrane processes; another benefit is that the ELM can be prepared using relatively
simple materials and equipment [2], enabling versatility and the opportunity to make the system as
environmentally friendly as possible. The ELM process also allows the recovery of metals significant for
recycling and in that way reduces the amount of metal being disposed of. Other traditional methods for
heavy metal removal are ion exchange, filtration and chemical precipitation, which result in the disposal of
the metals in landfills, preventing the recovery of the metals and possibly causing leaching of toxic
elements into the groundwater. These technologies also have issues with efficiency at low metal
concentrations, low metal selectivity and high start-up or high operating costs [8]. Metals at high
concentrations (>500 ppm) can be recovered with electrolysis, while at low concentrations (<5 ppm)
the metals can be removed by biosorption or ion exchange. At concentrations between 5 and 500 ppm
precipitation is possible, however it yields high volumes of sludge, with a low metal proportion [8].
ELM could be viewed as a development of solvent extraction, or liquid ion exchange, which is well
established in wastewater remediation. However, the solvent extraction method alone still cannot meet the
environmental standards for acceptable metal levels in discharged water, and the method also requires a
high initial concentration of metal [8]. ELM, on the other hand, can handle low concentrations of metal
and, if the process is optimized, may meet the environmental demands for the removal of metals
from wastewater.
2.2 Heavy metals in the context of environmental and health concerns
Heavy metals are known for their toxic effects on animals and humans, as well as their negative effects on
the environment. In addition, anthropogenic activities such as industry, agriculture and urbanisation
lead to the contamination by these toxic elements. The contamination of heavy metals in Southeast Asia
2 Green chemistry implies the design of chemicals and chemical processes that reduce or eliminate negative environmental
impacts such as reduced waste products, non-toxic components, and improved efficiency.
3 The report is also called the Brundtland report, and describes the concept as a “development that meets the needs of the present
without compromising the ability of future generations to meet their own needs”.
is a consequence of various industrial activities⁴, and the discharge of heavy metals into the environment
leads to the pollution of rivers, which in turn contaminates the groundwater and the sediment system
[66].
2.2.1 Chromium
Chromium is quite abundant in the Earth’s crust, occurring naturally in rocks, animals, plants,
soil, volcanic dust and gases. Chromium occurs primarily in two valence states, trivalent chromium and
hexavalent chromium, which both exist naturally in water as dissolved salts, although Cr(VI) compounds
are more soluble than Cr(III) compounds.
Figure 2.1: Cr(VI) dissolved in water has a yellow colour; the picture shows potassium dichromate in hydrochloric acid.
The metal does not exist naturally in its pure form but rather as chemical compounds, often with oxygen
and in its trivalent form [9]. Trivalent chromium is essential to humans⁵ and various other organisms
in small amounts, but becomes poisonous for most organisms at high concentrations [9]. Hexavalent
chromium, on the other hand, is highly poisonous; an oral dose of 2-5 g of soluble Cr(VI) can be fatal to
an adult human [10]. The target organ for acute and chronic inhalation exposure to hexavalent
chromium is the respiratory tract; several studies have shown that Cr(VI) increases the risk of lung
cancer [11], and if ingested, Cr(VI) causes liver and kidney damage [10]. The body has ways of
detoxifying Cr(VI) by reducing it into Cr(III), although this increases the level of Cr(III) in the body
[11]. As the oxidation state of chromium determines the toxicity, and the oxidation state depends on the pH
of the water and on the presence of reducing or oxidizing species, water quality standards are based on
the total concentration of chromium. The World Health Organization (WHO) has a provisional guideline
value of 0.05 ppm for the total chromium concentration in drinking water [12].
Important industrial sources of chromium waste include ferrochrome production, metal plating, steel
fabrication, paint and pigment production, wood treatment, manufacture of dyes, leather production
and tanning, and chromium milling and mining [10] [11]. Around 60% of the chromium produced is
used in chromium-based alloys, around 20% in chemical processes such as electroplating, and most of the
rest is used in furnace bricks and other refractory products. Through leakage, poor storage or
improper disposal practices, chromium is released into the environment and into water supplies [10].
4 Examples are dye industries, leather tanning, mining and electroplating, however, poor implementation of laws also poses a
problem.
5 The major source of trivalent chromium is through food and a daily requirement of around 0.05 mg is recommended (the
absorption of Cr(III) is about 3% when ingested).
2.2.2 Arsenic
Arsenic can be found all over the world and is known for, and often associated with, its toxicity and use
as a poison in homicides throughout history. As an example, the cause of death of the Swedish king Erik
XIV in 1577 is believed to be arsenic poisoning. The mobilization of arsenic occurs through natural
weathering, biological activity and volcanic emissions, and most environmental problems
related to arsenic are a consequence of natural mobilization [13].
However, human activities such as mining, combustion of fossil fuels, the use of herbicides and
pesticides containing arsenic, and the use of arsenic additives for livestock cause additional
arsenic contamination and environmental impact. The presence of arsenic pollution affects water
resource quality and the lives of millions of people worldwide. The WHO guideline states that
drinking water should not exceed an arsenic concentration of 0.01 ppm, although some countries,
including India, Bangladesh and Argentina, have adopted higher values as standard, and drinking water
poses the largest arsenic-related threat to public health [14].
Lethal doses in humans range from 0.1-3.5 g arsenic (1.5-500 mg/kg body weight), depending on the
compound and oxidation state⁶ [15]. Long-term exposure to arsenic in drinking water causes pigmentation
changes, skin thickening, nausea, muscular weakness and also various forms of cancer, including skin,
lung and kidney cancer, while acute arsenic poisoning typically causes vomiting, abdominal pain and
diarrhea [13]. Arsenic is the most common cause of acute heavy metal poisoning among adults and one
of the most toxic elements to be found, and it is therefore extremely important to control and minimize
the exposure of arsenic to humans and to the environment. In Asia the arsenic problem is amplified by
the pollution of rice paddies, leading to the uptake of arsenic in rice grains, rice being the primary food
source in Asia [16].

Figure 2.2: Arsenic is known for its use as poison.

⁶ Trivalent arsenic is more poisonous than the pentavalent form; arsine (AsH3) is considered the most toxic form, while
DMA (dimethylarsinic acid) is the least toxic.
2.3 Environmental and sustainability concerns regarding the chemicals
involved
An ELM system is generally composed of internal reagent, organic diluent, surfactant, and carrier, and
in order to obtain a sustainable system, all these components should be relatively cheap and as
environmentally friendly as possible.
In previous studies, kerosene has commonly been used as the organic diluent, due to its low viscosity,
ready availability and non-polar character. Kerosene is a petroleum product, an organic liquid
produced from the refining of crude oil [17]; it is the major component of aviation fuel, but is also used
as a solvent, degreaser and domestic fuel. There are no natural sources of kerosene and release into the
environment should be avoided. Toxicity occurs if kerosene is inhaled during ingestion, and it is
considered harmful and irritating to eyes and skin [18]. As kerosene is not considered environmentally
friendly, it is highly desirable to replace it with a renewable material, like a vegetable oil.
We have proposed palm oil as an alternative organic diluent, since it is a vegetable oil and
biodegradable. Palm oil is widely used in the food and cosmetics industries; it is used as cooking oil in
Southeast Asia and Africa and as a food additive in processed food worldwide. Another use of palm oil is
for the production of biofuels, such as biodiesel. The production of palm oil has grown rapidly over the last
decades and was in 2010 around 45 million tonnes, of which the main part comes from Malaysia and
Indonesia [19]. Nevertheless, palm oil is a controversial product; the large industry contributes to the
destruction of the rainforests in these countries, and considerations of how it has been produced and
what consequences the production may have are important. Palm oil production is an important
source of income for Malaysia and Indonesia, but bad practice in parts of the industry brings
high ecological and societal costs, such as fires to clear land for plantation and pressure on the species
that need the rainforest. A significant debate over the environmental impacts of the palm oil production
has occurred, regarding the diminishing of the rainforests as opposed to the efficient carbon assimilation
and high productivity [20]. However, the industry is improving, concern is increasing and, according to
the Roundtable on Sustainable Palm Oil (RSPO) in 2011, Malaysia is currently the world’s largest
producer of Certified Sustainable Palm Oil (CSPO) [21].
Figure 2.3: An oil palm tree cultivation plant in Malaysia.
Palm oil is produced from harvested fruit bunches of oil palm trees, which are usually grown
in large cultivation plants, see Figure 2.3. The fruits are separated from their bunches, digested and
pressed to extract the palm oil [20], which is then fractionated into various portions with different
properties. Despite the controversy of palm oil production, palm oil may still be regarded as harmless to
health and the environment compared to kerosene, in terms of toxicity and biodegradability.
Span 80 is used as surfactant for the ELM formulation; Span is the commercial name for sorbitan
fatty acid esters, which are non-ionic surfactants. Span 80 is a sorbitan monooleate and classified as
environmentally friendly, as it is sugar based, produced from renewable sources and
biodegradable [22]. Tween 80 is the corresponding ethoxylated ester of Span, also classified as
environmentally friendly, and is used as a stabilizer for the o/w interface of the w/o/w multiple
emulsions or as a co-surfactant for the palm oil based emulsion [22].
As a co-surfactant, 1-butanol is used, a biodegradable substance that is mildly toxic to humans [23].
Butanol is produced mainly from propylene and is thereby not entirely environmentally friendly. On the
other hand, ways of producing bio-butanol through fermentation of sewage sludge or sugar using bacteria,
in a way similar to the production of bio-ethanol, are now under development [24]. The ionic liquids
used in the formulation, described in further detail in Section 3, are also considered environmentally
friendly.
3 SURVEY OF THE FIELD
3.1 Liquid membrane
Liquid membranes consist of three distinct phases, the feed phase, the membrane phase and the stripping
phase. The feed phase, also called the external phase, is the water containing the metal or the other
species to be extracted and the stripping phase, also called the internal phase, is where the metal will be
trapped. The different phases are defined for a simultaneous extraction and stripping to occur; the
separation is achieved when permeation occurs from the aqueous feed phase to the receiving stripping
phase.7 There are three different kinds of liquid membrane: bulk liquid membrane (BLM), supported
liquid membrane (SLM) and emulsion liquid membrane (ELM). Among these membranes, the double
emulsion in ELM achieves the highest mass transfer area, which is a desired property in separation
methods. Since the ELM system is the one used in this project, we will thoroughly and exclusively
describe this one.
3.2 Emulsion liquid membrane (ELM)
ELM processes are gaining importance among other conventional separation methods, and since its
discovery by Norman Li for the separation of hydrocarbons [25], ELM has proven to be an easy way to
remove chemicals from wastewater. Compared to ELM, permeable and semi-permeable membrane
processes such as ultrafiltration, microfiltration and reverse osmosis have issues such as high capital cost,
large equipment size, low selectivity and low mass transfer rate. ELM offers intensification features such
as a larger interfacial area, high efficiency and simple operation. In terms of metal removal and
metal recovery from wastewater, the ELM technique has higher separation efficiency than conventional
methods [26]. Despite these advantages, ELM struggles with emulsion instability and breakage
of the membrane due to swelling under the high shear and stress rates throughout the separation
process, which reduce the overall efficiency of ELM processes. The ELM system consists of a double
emulsion: a water-in-oil (w/o) emulsion dispersed in an external aqueous phase. In the water-in-oil-in-
water (w/o/w) emulsion, the oil phase is the immiscible membrane phase, which separates the aqueous
phases and allows selective transport of several components. See Figure 3.1 for a schematic picture of a
w/o/w multiple emulsion and a representation of the phases.
Figure 3.1: Schematic picture of a water-in-oil-in-water emulsion and the phases in a multiple (w/o/w) emulsion. O=Oil
(Yellow) and W=Water (Gray for external phase and blue for internal phase)
7 In this report, the feed phase will further be referred to as the external phase and the stripping phase is referred to as the
internal phase. The ELM phase include both the membrane (organic) phase and the internal phase.
A simple emulsion is a heterogeneous mixture of two or more immiscible liquids, where one
liquid is dispersed in the other. An example of an emulsion is milk, which is fat dispersed in water [27].
The internal phase droplets are normally small, with a diameter in the order of 1-10 µm, and the
emulsion globules are generally larger, in the range of 0.1-0.2 mm in diameter [28].
3.2.1 Advantages of ELM
The system has a high interfacial area, 3000 m2/m3 for ELM compared to 100-200 m2/m3 for
SLM [29].
The diffusivity through most liquids is much higher than through polymer membranes, where a
very thin membrane must be developed to be able to compete with the high flux of ELM.
ELM provides high selectivity and high metal transfer flux due to the possibilities to incorporate
chemical components, which enhance the transport of the metal [26].
The extraction and the stripping coexist in the same stage, which gives savings in the equipment
volume.
The overall mass transfer is not only dependent on equilibrium considerations, but is also
controlled by a combination of the diffusion rate and the reaction rate of the extractant and the
metal complex.
The volume of the internal phase is much smaller than the volume of the external phase, which
enables concentration of the metal in the internal phase.
3.2.2 Disadvantages of ELM
The ELM process struggles with instability of the emulsion globules, which is mainly influenced by
osmotic swelling and globule breakage. The osmotic swelling occurs when the water in the
external phase diffuses through the membrane phase and swells the internal droplet, causing
dilution of the content in the internal phase. Breakage of the globules mainly occurs due to the
interfacial shear between the external phase and the membrane phase.
The process is often problematic in terms of the de-emulsification, which involves the recovery of
the membrane phase and the metal. The most commonly used method is high voltage
electrostatic fields, which is an energy demanding process.
3.3 Mechanism of ELM mass transport
The permeation of metals through the membrane in the ELM process occurs naturally by diffusion and
various components can be used to enhance the separation such as additives, chemical reagents or
specific carriers. Ways of improving the effectiveness of the separation are to maximize the flux
through the membrane and the capacity of the diffusion; two related mechanisms are known as
Type 1 facilitation and Type 2 facilitation.
In the case of Type 1 facilitation a stripping agent is incorporated in the internal phase to increase the
mass transfer. The stripping agent will react with the solute, resulting in a membrane insoluble product.
The mechanism usually used for recovery of heavy metal and the mechanism considered in this project is
Type 2 facilitation, or carrier-facilitated transport. In addition to the incorporated stripping agent in
Type 1 facilitation, a carrier or a reactive component is also incorporated in the membrane phase to
enhance the metal-transport. This mechanism is schematically described in Figure 3.2. The carrier forms
a membrane-internal compound (for example [NR_m+OH-] if NaOH is used as stripping agent) that is
only soluble in the membrane phase, allowing diffusion through the membrane phase to the membrane-
external interface. A reversible reaction with the metal complex ([MX]n-) to be transported occurs at the
membrane-external interface [25]. The formed carrier-metal complex ([NR_mMX]) diffuses through the
membrane to the membrane-internal interface and dissociates, thus releasing the metal in the internal
phase. The carrier diffuses back to the membrane-external interface to repeatedly react with another
metal complex from the external phase. This makes it possible for the carrier to be regenerated and to
transport the metal many times, achieving a high degree of separation. When the metal is insoluble in
the membrane phase and the only way it can be transported is through the formation of a carrier-
metal complex, the concentration gradient is maximized by the reaction with a stripping agent at the
membrane-internal interface.
Each step in Type 2 facilitation transport can be summarized as follows:
1. Reaction of the carrier and metal ion occurs at the interface of the external and the membrane
phase.
2. The formed carrier-metal complex diffuses across the membrane phase to the internal-
membrane interface.
3. The metal ion is released in the internal-membrane interface and the carrier is regenerated.
4. The metal ion diffuses from the internal-membrane interface to the bulk internal phase.
5. The carrier is returned across the membrane (mass transfer of the extractant in the membrane phase
from the internal-membrane interface to the external-membrane interface).
The ion flux through the membrane is created by a difference in chemical potential, which is due to the
different pH between the two aqueous phases.
Figure 3.2: Transport mechanism in ELM process. A): a w/o emulsion droplet dispersed in the aqueous external phase, B):
schematic picture of the reactions occurring at the interfaces.
3.4 Operational aspects of ELM
The different steps encountered in an ELM process are described as follows and are also shown in Figure 3.3:
1. Emulsification of the membrane and internal phase
2. Emulsion-external phase contacting
3. Separation of the emulsion and external phase after extraction
4. De-emulsification and recovery of the metal and the membrane phase
Figure 3.4: Schematic picture of the hydrophilic (“head”) and the lipophilic (“tail”) of a surfactant and a co-surfactant packed
between the surfactants.
Bancroft’s rule states that water-soluble emulsifiers tend to give o/w emulsions and oil-soluble
emulsifiers tend to give w/o emulsions. The concept of hydrophilic-lipophilic balance (HLB) may be
used for a more quantitative approach when assigning the composition of a formulation, also utilized in
this study to estimate the degree to which the surfactant is hydrophilic or lipophilic and to choose
suitable surfactants for the multiple emulsion creation. A surfactant with HLB values in the range of 1-10
is more soluble in oil than in water, and those in the range 10-20 are more soluble in water than in oil.
In Table 3.1 some common HLB values are given. It has also been found that the combination of two
surfactants, one more hydrophobic and one more hydrophilic, is superior to the use of a single surfactant
when making a stable emulsion and it contributes to a better packing of the surfactants in the oil-water
interface, as the emulsifiers will have different critical packing parameters (CPP)8 [31].
Table 3.1: Common HLB value ranges and their applications [30].
HLB Applications
1-1.3 Antifoams
3.5-8 Water-in-oil emulsifiers
7-9 Wetting and spreading agents
8-16 Oil-in-water emulsifiers
13-16 Detergents
15-40 Solubilizers
When creating a mixture of x % of a surfactant with HLB value A and y % of a surfactant with HLB
value B, the total HLB is calculated using Equation 3.1 [31].
HLB(A+B) = (Ax + By)/(x + y) Equation 3.1
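As an illustrative worked example (the 70/30 split is hypothetical, not a formulation from this study), mixing the surfactants used here, Span 80 (HLB ≈ 4.3) and Tween 80 (HLB ≈ 15), according to Equation 3.1 gives:

HLB(70 % Span 80 + 30 % Tween 80) = (4.3 · 70 + 15 · 30)/(70 + 30) = 751/100 ≈ 7.5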
Multiple emulsion systems usually require at least two surfactants to create a stable emulsion: one
lipophilic with a low HLB to stabilize the w/o interface and one hydrophilic with a high HLB for the
o/w interface. The two emulsifiers interact at the interfaces; therefore, the chemical
composition and compatibility of the emulsifiers are important. When creating a complex w/o/w
emulsion, the process is normally divided into two steps. In step one, the aqueous internal phase is slowly
poured into a beaker containing the oil phase, the lipophilic surfactant and other required additives, and a
high-speed impeller, or homogenizer, is used to disperse the aqueous internal droplets into the oil
phase, resulting in a w/o emulsion. In the second step, the created w/o emulsion is poured into
the beaker containing the aqueous external phase and a hydrophilic surfactant while agitated, to disperse
the w/o emulsion into the aqueous external phase. These steps and the procedure for creating a stable
multiple emulsion are shown in Figure 3.5.

⁸ CPP is defined for a surfactant as the ratio v/(l_max · a), where v is the effective volume of the hydrophobic tail, l_max is the
extended length of the alkyl chain (the tail) and a is the cross-sectional area of the head group.
Figure 3.5: Creating water-in-oil (w/o) emulsion followed by a water-in-oil-water (w/o/w) emulsion.
However, in this project the methodology for creating the w/o/w emulsion is modified. The
created w/o/w emulsion should be stable enough to ensure a high contact surface area between the
ELM phase and the external phase during the extraction. It should simultaneously be unstable enough
for a quick phase separation to occur once the extraction has been performed (when the agitation is
turned off), as a quick recovery of the purified water is required before the breaking of the w/o emulsion.
Because of this, the hydrophilic surfactant is added already in the first step, together with the lipophilic
surfactant, with the intention that some of the hydrophilic surfactant may migrate to the o/w interface of
the multiple emulsion and facilitate the second emulsification. The chosen surfactants in this research are
the commercially available Span 80 and Tween 80, both viscous liquids at room temperature. The nonionic
surfactant sorbitan fatty acid ester (commercial name Span) and the corresponding polysorbate,
polyoxyethylene (POE) sorbitan fatty acid ester (commercial name Tween), are often used to stabilize
multiple w/o/w emulsions [32]. See Figure 3.6 for the structural formulas of Span 80 and Tween 80
and the geometrical packing structure in an o/w emulsion.
Figure 3.6: A): the structural formula of Span 80 (sorbitan monooleate, HLB ≈ 4.3) and Tween 80 (ethoxylated sorbitan
monooleate, HLB ≈ 15). B): the geometrical packing of the surfactants at the oil-water interface in dispersed oil droplets.
The Spans are mixtures of partial esters of sorbitol and mono- and di-anhydrides with oleic acid,
generally insoluble in water, corresponding to the lower HLB value. They are commonly used as water-
in-oil emulsifiers and wetting agents [33]. The polysorbates (Tweens) are a complex mixture of sorbitol
esters and mono- and di-anhydrides condensed with ethylene oxide, resulting in a larger and more polar
head group, hence a higher solubility in water. This is reflected in their higher HLB value, and they are
commonly used as emulsifiers for oil-in-water emulsions [33]. The numbers in the commercial names
denote the kind of hydrophobic groups present in the compound, and 80 represents oleate.
Multiple emulsions are limited by instability, with a consequent reduction of the overall removal
efficiency in the ELM process. The instability is mainly due to the inherent thermodynamic instability
and the complexities of their structure [34]. One limitation arises due to the immiscibility of the
dispersed and continuous phase, where the dispersed phase breaks into droplets and the free energy of
the surface increases. The increase of interfacial free energy causes thermodynamic instability of the
dispersed phase, which leads to droplet coalescence [31]. Another factor that affects stability is the
osmotic pressure. If the external osmotic pressure is higher than in the internal aqueous phase, there will
be water passing through the membrane phase leading to a swelling and eventually a rupture of the
internal droplets, resulting in a leakage of the content into the external phase. Consequently, if the
osmotic pressure is lower in the internal phase water will pass from the internal phase to the external
phase resulting in shrinkage of the internal droplets. Ways of measuring the emulsion stability are
limited because the stability of the internal droplets and external droplets must be determined. One
direct way to examine the multiple droplets is by using microscopy [35]. In this project, due to the
limited time, no such measurements were made.
3.4.2 Ionic liquid
Room temperature ionic liquids (RTILs) are by definition salts having a melting point lower than 100°C,
thus in the liquid state at room temperature. The main properties of RTILs are that they have negligible
vapor pressure, wide window of electrochemical stability, thermal stability at high temperature,
excellent chemical stability and high ionic mobility [36]. These properties make them suitable replacements for volatile organic solvents in several chemical reactions [37]. However, the role of ionic liquids used as
stabilizer, carrier or surfactant in ELM is sparsely documented [26].
3.4.2.1 Stabilizer
Goyal et al showed that the stability of a w/o emulsion with kerosene as diluent was improved by incorporating the ionic liquid 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([BMIM]+[NTf2]−) in the membrane phase as a stabilizer: by the addition of 3 wt% [BMIM]+[NTf2]− the stability of the w/o emulsion could be enhanced from a few minutes up to 7 h [26]. [BMIM]+[NTf2]− will therefore be used in subproject 2 and has been chosen due to its low viscosity (52 mPa·s) compared to other ionic liquids, which facilitates a homogeneous dispersion in the EILM. It is also hydrophobic, has a low toxicity and a low density. [BMIM]+[NTf2]− is a room temperature ionic liquid with a melting point of 4 ˚C, and its molecular structure can be seen in Figure 3.7.
Figure 3.7: Molecular structure of the ionic liquid 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([BMIM]+[NTf2]−).
3.4.2.2 Carrier
The carrier, also known as extractant agent, is present in the membrane phase and is used to facilitate
the metal-transport through the membrane. The chemical behaviour of the extractant is broadly
classified into the three following categories [1]:
- Acidic: this category includes for example organophosphinic acids (e.g. Cyanex 272, DTPA) and organophosphonic acids (e.g. PC 88A, Ionquest 801).
- Basic or anionic exchangers: quaternary ammonium salts (e.g. Aliquat 336) and tertiary amines (e.g. TOA, TNOA, Alamine 336) are included in this category, and the extraction depends on the ability of the metal ion to form anionic species in the external phase. The metal is extracted as an ion pair by the amine salt.
- Solvating extractants: these carriers compete with water as the first solvation shell around the metal ion, which facilitates the transfer of the metal ion complex into the membrane phase. Commercially used solvating extractants include phosphine oxides (e.g. TOPO, Cyanex 923) and phosphorous esters (e.g. TBP).
Important properties of the carrier that affect the overall removal efficiency are viscosity, density,
solubility in the organic phase and insolubility in the aqueous phases. The carrier chosen for this research
is a quaternary ammonium salt called tri-n-octylmethylammonium chloride (TOMAC or commercial
name Aliquat 336) with a melting point of -20°C and viscosity of 500 mPa·s at 30°C. As seen in Figure
3.8, TOMAC contains an electron deficient nitrogen group and a mobile chloride counter-ion, which
contributes to a so-called anion displacement reaction between the carrier and the metal ion. This reaction is relatively fast in comparison to other complex formations, e.g. ligand formation, due to the presence of strong electrostatic interactions.
Figure 3.8: Molecular formula of tri-n-octylmethylammonium chloride (TOMAC)
3.4.3 Diluent
The diluent has an important function in the ELM process, since it is the major constituent of the
membrane phase and the stability of the membrane is a vital factor for an effective metal-transport. A
higher viscosity of the diluent can generally increase the emulsion stability (Shere and Cheung noted that
emulsions with high viscosity oils are generally more stable) [38], but a high viscosity can also decrease
the mass transport due to a higher resistance to diffusivity. Regarding solvent extraction a lower
viscosity of the diluent benefits the overall capacity due to the decreased mass-transport resistance [39]
and this is believed to be the case also for the ELM process. High enough density is necessary for an
easier settling of the liquid phases, and for the phase separation between the external phase and the ELM
phase, a high difference in density is beneficial. Low solubility in water is needed because the interaction
with water breaks down the emulsion [40]. When it comes to industrial use of the ELM process, the diluent constitutes the largest fraction of the membrane phase, wherefore other properties should also be considered, such as
corrosivity (which increases the equipment cost or might require pre-treatments), easy recoverability,
thermal and chemical stability and recyclability.
3.4.3.1 Palm oil as diluent
Venkateswaran et al studied several vegetable oils as diluents for the extraction of phenol in liquid
membranes and palm oil was chosen when considering the removal efficiency, with a permeability of
8.5×10⁻⁶ m/s in an acidic feed of pH 2.0 [4]. Very few previous studies were found using palm oil or any other vegetable oil as a diluent in the ELM process, which is the main purpose of subproject 1. As palm oil is easily available in Malaysia9 at a low cost, it is a suitable replacement for the common petroleum based
diluents such as kerosene, toluene, heptane and n-dodecane. This research uses cooking oil from the
supermarket, which is a fraction of refined bleached deodorized palm oil called palm olein and consists
mostly of unsaturated fatty acids [41]. Crude palm oil consists mainly of triglycerides, see Figure 3.9 for
the molecular structure, but also of small amounts of monoglycerides and diglycerides. The fatty acid
chain in palm oil triglycerides varies in the number of carbons and in structure, which also defines the
chemical and physical properties [42]. The chain length of the fatty acids is between 12 to 20 carbons,
half of the fatty acids are saturated (0.1% laurate, 1% myristate, 44% palmitate, 5% stearate) and the
other half is unsaturated (39% monounsaturated oleate, 10% polyunsaturated linoleate, 0.3 %
polyunsaturated alpha-linolenate). The degree of saturation determines the stability of the oil against
oxidation. Palm oil has a density of 887.5 kg/m3 [43] and a viscosity of 130 mPa·s at 20 ˚C [4]. Random
analyses of samples of palm olein have shown the presence of about 2% of 1,2-diglycerides, about 4% of
1,3-diglycerides and trace amounts of monoglycerides and other components [41]. The commercially
used cooking oils are commonly enriched with vitamins, nutrients and flavours.
Figure 3.9 The molecular structure of saturated triglyceride and glycerol. [44]
3.4.3.2 Kerosene as diluent
One of the most commonly used diluents in ELM systems, and also the diluent used in subproject 2, is
kerosene (also called paraffin), a thin clear liquid mixture of hydrocarbons with a viscosity of 1.64 mPa·s
at 27°C [41] and a density of 0.78-0.81g/cm3. Kerosene is obtained through fractional distillation of
petroleum between 150 and 275°C and its chemical composition depends on its source, but usually
consists of 10 different hydrocarbons each containing 10-16 carbon atoms per molecule with the general
formula C H ; see Figure 3.10 for the structure of a kerosene constituent with n=12. The main
n 2n+2
constituents of kerosene are straight chain and branched chain paraffins and also ring shaped
cycloparaffins (naphtenes) [45]. Reasons for using kerosene is the easy availability in Malaysia for a low
cost due to the subsidized price [46] and it has also been reported to form a more stable emulsion
compared to toluene and n-dodecane [40].
9 Malaysia is, after Indonesia, the world’s second largest producer of palm oil.
Figure 3.10: The molecular structure of branched chain kerosene.
3.4.4 Stripping agents
The purpose of the stripping agent is to react with the metal ion in the internal phase through a stripping
reaction. This reaction converts the metal ion into a membrane insoluble compound hence trapping the
metal in the internal phase droplets. It also enables transport against the metal concentration gradient.
The stripping agent is incorporated in the internal phase and can be an acid or a base, depending on the
species to be extracted. As an example, NaOH can be used as stripping agent for the chromium removal
from wastewater [26].
3.4.5 De-emulsification
The metal and the membrane phase are recovered during the de-emulsification step, where the breaking
of the w/o emulsion occurs. There are two types of de-emulsification methods: physical and chemical
ones [47]. Chemical methods include the addition of a de-emulsifier, which is the easiest way but limits
the reuse of the component due to changes in the properties of the diluent, surfactant and carrier.
Physical methods include heating, centrifugation, microwave radiation, high shear and solvent
dissolution. The most commonly used de-emulsification technique is the use of electrostatic fields.
However, this part is not included in the scope of this project, hence it will not be treated further.
3.5 Conditions affecting extraction rate and permeability
Various operating conditions affect the extraction rate and the permeability, including the membrane
formulation, the stripping agent concentration, the stirring rate and the external phase conditions.
Phenomena that are affected by these parameters are swelling and membrane breakage. As mentioned
previously one of the disadvantages of ELM systems is the tendency of swelling of the emulsion globules.
Two types of swelling exist: osmotic swelling and entrainment swelling. Osmotic swelling occurs as a
result of a large difference in osmotic pressure between the internal and the external phase, causing a
transfer of water from the external phase into the internal phase. Entrainment swelling is caused by the
entrainment of the external phase into the internal phase through repeated coalescence and re-dispersion
of emulsion globules during the dispersion procedure causing an increase in the volume of the internal
phase. However, osmotic swelling cannot be differentiated from entrainment swelling and it is difficult
to determine both the swelling and the breakage phenomena in the same experiment [1]. There are
several proposed mechanisms to explain ELM globule swelling. The most probable mechanism is
molecular diffusion of water from the external phase to the internal phase and water transfer via
hydration of the surfactant molecules. Two other mechanisms proposed are micelle-assisted transport of
water from the external phase to the internal phase and entrainment with a subsequent emulsification of
the external phase caused by an excess of surfactant. Through general observations, several factors have
been suggested to influence the rate of swelling such as the type and concentration of the surfactant, the
stirring speed, the organic to internal phase ratio and the background electrolyte concentration [1].
3.5.1 Membrane formulation
The membrane phase consists of diluent, carrier, surfactant and co-surfactant, and requires an optimal
formulation for the emulsion to be stable and for the extraction to take place. The surfactant
concentration has an important role in the stability of the w/o emulsion where a higher surfactant
concentration results in improved stability due to the lower surface tension, which in turn leads to a
smaller droplet size and a larger mass transfer area. However, larger amounts of surfactant increase the
viscosity of the membrane phase and decrease the removal efficiency due to lower diffusivity of the
metal through the membrane phase [26] hence an optimum surfactant concentration is needed. Goyal et
al showed that up to 3 wt% concentration (relative to the membrane phase) of Span 80 increases the
removal efficiency in the chromium(VI) extraction [26]. A higher concentration of Span 80 increases the
mass transfer resistance, leads to formation of micelles that result in membrane swelling but also makes
the de-emulsification and metal recovery more difficult. Regarding the carrier concentration Goyal et al
showed that a decrease in extraction rate occurred beyond a certain concentration (0.3 wt%) of the
carrier [26]. These results motivate the surfactant and carrier concentrations chosen in this project.
3.5.2 Stirring rate
The stirring rate has a large impact on the ELM extraction capacity, since it enhances the mixing during
extraction and provides smaller emulsion droplets due to the shear force applied on the emulsion
globules, providing a larger mass transfer area. However, a further increase in stirring speed may lead to
a decrease in emulsion stability and leakage of the internal phase due to the breakage of emulsion
droplets. When mixing the external phase and ELM phase the commonly used stirring rate is 100-800
rpm. The homogenization speed for the creation of the ELM phase is often performed at 3000-10 000
rpm [26].
3.5.3 Internal stripping agent concentration
The stripping agent concentration has an important role when it comes to the extraction rate. A higher
concentration increases the metal extraction rate both due to the stronger pH gradient and the higher
amount of stripping agent present. As mentioned earlier, the pH difference between the external phase
and the internal phase is the main driving force for the transport of the carrier-metal complex through
the membrane phase. Goyal et al showed that an optimal stripping agent concentration exists and a
further increase has a negative influence on the removal efficiency [26]. Furthermore, an increase of the
internal concentration gives a higher pH difference between the external phase and the internal phase,
which may increase the osmotic pressure and cause membrane swelling.
3.5.4 Metal concentration of the external phase
The metal concentration in the external phase influences both the extraction rate and efficiency, which
depend on the capacity of the internal phase to strip the metal. High initial metal concentration requires
a high emulsion capacity and a low initial metal concentration means that the metal ions may have to
compete with other ions present in the external phase.
3.5.5 pH of the external phase
In order to accomplish the extraction of diluted metals from water, the pH of the external phase has to
be precisely controlled. Moreover, the chemistry of the different metal complexes in the external phase
influences the carrier-metal transport, which can be controlled by choosing the proper pH of the external
phase.
3.5.5.1 Chemistry of chromium
Hexavalent chromium ions exist in different forms in the aqueous phase depending on the pH (the chromate and dichromate species H2CrO4, H2Cr2O7, HCrO4−, HCr2O7−, CrO42− and Cr2O72−) [48]. For slightly acidic or basic pH the CrO42− ion is the dominating form; an increase in the concentration of [H+] leads to a reaction with CrO42− to form HCrO4−, and upon further increase H2CrO4 is formed.
Figure 3.11 shows the abundance of the chromate ions depending on the pH of the external phase. Due
to the basic properties of TOMAC, the target complex in this case requires an anionic chromium complex, and previous studies with successful chromium extraction have used a pH as low as 0.5 [26].
Figure 3.11: Abundance of chromium(VI) ions in water (reproduced from [49] with permission from the authors).
The reactions involved in the chromium extraction by ELM include the carrier reacting with the
stripping agent and the metal complex.
The carrier diffuses through the membrane to the membrane-internal interface where it reacts with the stripping agent, as shown in Equation 3.2. This reaction yields chloride ions in the internal phase, which also help to strip the metal complex [26].

(NR4+Cl−)membrane + (Na+OH−)internal ⇌ (NR4+OH−)membrane + (Na+Cl−)internal    Equation 3.2
There are two types of carriers present in the membrane phase that react with the metal complex, TOMAC (NR4+Cl−) and TOMAOH (NR4+OH−). Equation 3.3 and Equation 3.4 show the anion displacement reaction of the two types of carriers with one of the anionic chromium complexes, HCrO4−.

(NR4+Cl−)membrane + (HCrO4−)external ⇌ (NR4+HCrO4−)membrane + (Cl−)external    Equation 3.3

(NR4+OH−)membrane + (HCrO4−)external ⇌ (NR4+HCrO4−)membrane + (OH−)external    Equation 3.4
The formed carrier-metal complex diffuses across the membrane phase to the membrane-internal interface, where the stripping reaction occurs and the metal is dissociated into the internal phase as shown in Equation 3.5. The created complex Na+HCrO4− is insoluble in the membrane phase and will therefore not diffuse back to the external phase, but will instead be trapped within the internal droplet [48].

(NR4+HCrO4−)membrane + (Na+OH−)internal ⇌ (NR4+OH−)membrane + (Na+HCrO4−)internal    Equation 3.5
The dissociated HCrO4− ion in the internal phase will remain in equilibrium after the reaction with the hydroxide ions, as shown in Equation 3.6.

HCrO4− + OH− ⇌ CrO42− + H2O    Equation 3.6
As the stripping reaction proceeds and hydroxide ions are released into the external phase, the pH increases due to the exchange of hydroxide ions with the metal complex. As the pH of the external phase changes, an increased amount of CrO42− ions will be present, which consequently react slowly with TOMAC and TOMAOH. Each CrO42− species requires two extractant species for the reaction with the carrier to occur, resulting in a decreased reaction rate with time [26]. The pH of the external phase can be adjusted with different kinds of acids such as HNO3, HCl and H2SO4. Previous studies suggest that adjusting the pH with HCl for the removal of chromium maintained membrane stability longer than adjustment with HNO3 and H2SO4 [50].
3.5.5.2 Chemistry of arsenic
Arsenic exists in the oxidation states -3, 0, +3 and +5 [13]. The dominating species in ground water are arsenite (AsO33−, the arsenic(III) ion) and arsenate (AsO43−, the arsenic(V) ion). The presence of dissociated or undissociated arsenic complexes depends on the pH of the water, as given in Figure 3.12. It can be seen that arsenic(V) is found as different neutral and ionic complexes in different pH ranges (H3AsO4, H2AsO4−, HAsO42−, AsO43−). The most common pH range in ground water is 6.7-8.8, where H2AsO4− and HAsO42− are dominant [51].
Figure 3.12: Molar fraction of the arsenic(V) complexes H3AsO4, H2AsO4−, HAsO42− and AsO43− for different pH ranges (reprinted from [52] with permission from the authors).
The dissociation of arsenic(V), with the corresponding logarithmic acid dissociation constants (pKa approximately 2.2, 7.0 and 11.5 for the three steps), is described below:

H3AsO4 → H2AsO4− → HAsO42− → AsO43−
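For illustration, the pH dependence shown in Figure 3.12 can be reproduced from these dissociation constants. The following is a minimal Python sketch assuming the approximate textbook pKa values above (the exact values behind Figure 3.12 in [52] may differ slightly); the function name is only illustrative.

```python
import numpy as np

PKA = [2.2, 7.0, 11.5]  # approximate pKa values of H3AsO4 (assumed)

def arsenate_fractions(pH: float) -> np.ndarray:
    """Molar fractions of H3AsO4, H2AsO4-, HAsO4^2- and AsO4^3- at a given pH."""
    h = 10.0 ** (-pH)
    k1, k2, k3 = (10.0 ** (-p) for p in PKA)
    # Relative abundance of each protonation state, normalized to fractions.
    terms = np.array([h**3, h**2 * k1, h * k1 * k2, k1 * k2 * k3])
    return terms / terms.sum()

# In the common ground water range (pH 6.7-8.8), H2AsO4- and HAsO4^2- dominate:
print(arsenate_fractions(8.0).round(3))
```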
A suitable carrier can be chosen taking into consideration the form of the metal complex to be
extracted. As mentioned previously the basic carrier TOMAC is used in this project and an anionic
arsenic complex is necessary for the creation of the carrier-arsenic complex and the pH of the external
phase chosen to facilitate the reaction. The pH of the external phase should be adjusted with a base to
ensure a high pH where the anionic arsenic species are present.
3.5.6 Treat ratio
The treat ratio in this study is the volume ratio of the external phase to the ELM phase; it thus determines the volume of ELM required per unit volume of the external phase, as shown in Equation 3.7.

Treat ratio (F/ELM) = V(external phase) / V(ELM phase)    Equation 3.7
This ratio defines the effectiveness and the economy of the ELM process because a smaller volume of the
membrane phase (a high F/ELM) reduces the overall cost. Goyal et al have discussed that an increase of
the treat ratio increases the possibility of swelling and breakage of the emulsion but also that a reduction
of internal phase volume results in decreased stripping [26]. A lower treat ratio increases the extraction
rate due to the presence of a larger ratio of membrane and internal phase, and increases the capacity of
extraction. A treat ratio of 2 was found to be optimal.
3.5.7 Organic to internal phase ratio (O/I)
The organic to internal phase ratio describes the weight ratio of the organic phase to the internal phase, as shown in Equation 3.8.

O/I = m(organic phase) / m(internal phase)    Equation 3.8
This ratio is important to control in order to achieve optimal emulsion stability where phase inversion (a
change from w/o to o/w) depends on the relative volume of the different phases but also on the HLB
values of the surfactants and on the temperature [26]. A decrease of the organic fraction relative to the
internal phase causes an increase of the amount of stripping molecules and increases the stripping rate at
the internal to organic interface.
4 METHODOLOGY AND EXPERIMENTAL TECHNIQUES
In this project the confirmation and optimization of chromium and arsenic extraction using an ELM system were investigated, and the possibility of using a vegetable oil as organic diluent in the system was explored. A large number of experiments were required,10 and this section starts with a
general description of the experiments performed followed by a more detailed description of the
experiments carried out in the two subprojects.
4.1 Simple liquid-liquid extraction
As the ELM extraction process is of Type 2 facilitation, in which a mobile carrier is incorporated in the
liquid membrane, the compatibility of the carrier and the current metal-complex must be confirmed.
For this purpose a simple liquid-liquid extraction (or solvent extraction) is a fast and straightforward
way to verify and ensure the compatibility. It can also be used to determine the pH range where the
extraction is most efficient. The verification is performed as follows (see Figure 4.1 for a schematic
picture):
1. The aqueous phase is prepared with a known metal concentration and pH is adjusted
2. The organic diluent (solvent) is mixed with the carrier in a beaker, using an agitator stirred
by a straight blade impeller, with the carrier in molar proportion to the metal
3. The external phase is poured into the organic phase while stirring and the system is left for a
certain time at a constant agitation speed
4. The agitation is turned off, the aqueous and organic phase are allowed to separate and
samples are taken from the aqueous phase for concentration measurements with ICP-OES
(See Section 4.2)
Figure 4.1: Schematic picture of the simple liquid-liquid extraction process.
4.2 Concentration measurements: analysis of removal efficiency
To measure the extraction efficiency, either from the simple liquid-liquid extraction or from the ELM
extraction process, samples from the external phase are analysed using inductively coupled plasma
optical emission spectroscopy (ICP-OES). The initial concentration of the batch external phase is
measured simultaneously to obtain the accurate amount of removal. The removal efficiency is calculated according to Equation 4.1, where c_i is the measured initial external phase concentration (mg/L) and c_e is the measured concentration (mg/L) of the metal in the sample taken as a function of time.
10 All of the experiments were performed at the University of Malaya, Faculty of Engineering: Department of Chemical
Engineering
Removal efficiency (%) = (c_i − c_e) / c_i × 100    Equation 4.1
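A minimal Python sketch of Equation 4.1, including the dilution correction applied to the samples before analysis (function and parameter names are only illustrative):

```python
def removal_efficiency(c_initial: float, c_sample: float,
                       dilution_factor: float = 1.0) -> float:
    """Removal efficiency (%) according to Equation 4.1.

    c_initial       -- measured initial external phase concentration c_i (mg/L)
    c_sample        -- measured sample concentration (mg/L), before dilution correction
    dilution_factor -- factor by which the sample was diluted prior to analysis
    """
    c_e = c_sample * dilution_factor
    return (c_initial - c_e) / c_initial * 100.0

# Example: 100 mg/L initially, 12 mg/L in an undiluted sample -> 88 % removal.
print(removal_efficiency(100.0, 12.0))
```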
4.2.1 Inductively coupled plasma optical emission spectroscopy (ICP-OES)
Inductively coupled plasma (ICP-OES and ICP-MS) spectrometry and atomic absorption
spectrophotometry (AA) are the most widely used analytical methods used for determining trace
elements. However, Saravanan et al used a UV Jasco spectrophotometer for the detection of chromium
[53]. The device used in this project was an Optima 7000 DV ICP-OES from PerkinElmer. The device
has a dual-view design and a detection limit in the range of parts per billion. The basic principle of ICP-OES consists of the excitation of elements, the detection of the characteristic wavelength of the emitted
light (arsenic at 193.696 nm and chromium at 267.7 nm) and the measurement of its intensity to obtain
the concentration of the element. More than one element can be analysed simultaneously and the
analysis is relatively quick, one sample is analysed in 1-3 minutes, depending on the washing time and
the number of measurements per sample. The device used can be seen in Figure 4.2.
Figure 4.2: The Optima 7000 DV ICP-OES used for concentration measurements.
Plasma that contains sufficient concentration of ions and electrons to make the gas electrically
conductive is referred to as inductively coupled plasma. The plasma is created from a flow of argon gas
through a torch that contains a Tesla unit, which creates a high voltage, low current, high frequency alternating current. The formation of plasma takes place through an adequate electromagnetic
field strength, introducing electrons into the gas stream and causing them to collide with argon atoms.
Once the plasma is ignited, the Tesla unit is turned off. The inductively coupled plasma is used to excite
atoms and ions and cause them to emit electromagnetic radiation at wavelengths characteristic for that
particular element, and the intensity of the radiation is indicative of the concentration of the element. For this, a calibration curve for the element in question is established from samples of known concentration,
1, 5, 15, 30, 70 and 100 ppm of As(V) ions or Cr(VI) ions respectively. The calibration curves obtained
of chromium and of arsenic can be seen in Figure 4.3.
Figure 4.3: Calibration curves obtained from the ICP measurements for arsenic and chromium. The linear fits were y = 1908.9x − 708.6 (R² = 0.9984) for arsenic and y = 37566x + 10542 (R² = 0.999) for chromium, where y is the measured intensity and x the concentration (mg/L).
The measured intensity (y) of a sample with unknown concentration (x) is compared with the
corresponding calibration curve, hence the metal concentration in the sample is obtained.
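The calibration and its inversion amount to a linear least-squares fit. Below is a minimal Python sketch of the procedure; the intensity values are illustrative placeholders, not the measured data behind Figure 4.3.

```python
import numpy as np

# Known standards (mg/L) and their measured intensities (placeholder values).
conc = np.array([1.0, 5.0, 15.0, 30.0, 70.0, 100.0])
intensity = np.array([1.2e3, 8.8e3, 2.8e4, 5.6e4, 1.33e5, 1.90e5])

# Linear fit intensity = m*conc + b, as in Figure 4.3.
m, b = np.polyfit(conc, intensity, 1)

def concentration_from_intensity(y: float, dilution_factor: float = 1.0) -> float:
    """Invert the calibration line and correct for any sample dilution."""
    return (y - b) / m * dilution_factor
```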
A sample size of at least 5 ml is required for reliable measurements and the device is controlled with
WinLab32 software. Every sample was analysed three times whereby a corrected mean intensity was
obtained and used for the determination of the sample concentration. The results obtained from the
measurements contain both the intensities measured and the concentrations obtained.
4.3 Emulsification: creating the membrane and the internal phase
4.3.1 Preparation of a w/o emulsion
The emulsion type needed for the ELM system is a water-in-oil emulsion, in which the aqueous phase is
dispersed in the organic oil phase. First the solubility of the different components in the oil is checked,
and then the proper amounts of surfactant, eventual co-surfactants and/or stabilizer, carrier and oil
composing the organic phase are weighed to the correct mass ratios and mixed using a homogenizer. The
aqueous solution (containing the stripping agent) is slowly added to the mixture using a pipette while
still homogenizing. When all of the internal solution has been added, the homogenizer is kept on for the
decided emulsification time and the final solution is checked to be macroscopically homogenous.
4.3.2 Exploring the stability of an emulsion
The stability and viscosity of a novel emulsion formulation was studied and the composition identified as
the best possible could be determined for further investigation. If phase separation was observed or if the
original state of the emulsion could not be regained upon slight application of shear stress the emulsion
was considered destabilized.
Brief methodology:
1. Calculation of an approximate composition of the formulation using the HLB concept and
comparison with earlier studies,
2. Preparation of a number of emulsion formulations while varying the composition, emulsification
time and homogenization speed in a systematic manner,
3. Study of stability and viscosity,
4. Summary of results and choice of formulation for further studies.
The emulsion created in step 2 above is transferred to a marked separation funnel in order to wait for phase separation to occur. The emulsion is left at ambient conditions and the stability is checked at regular time intervals.
4.5 Subproject 1: using palm oil as diluent
4.5.1 Emulsion stability studies
Before the metal extraction could be investigated, a novel emulsion formulation with palm oil as diluent
suitable for the metal extraction needed to be developed. A relatively low viscosity and a stability time
of at least one hour were desired. The parameters investigated can be seen in Table 4.1, and a number of
different emulsions were prepared while varying these parameters.
Table 4.1: Parameters investigated in the emulsion stability studies.
Parameter Range
O/I phase mass ratio 2 to 3
Emulsification time 6 to 15 min
Homogenization speed 3200 to 7000 rpm
Span 80 2 to 4 wt%
Tween 80 0 to 1.5 wt%
[BMIM]+[NTf2]- 0 to 3 wt%
Butanol 0 to 3 wt%
The stability was studied by observing the phase separation in a separation funnel, while the viscosity was only estimated by naked-eye inspection of the emulsion. From previous studies it is known that Span
80 works well as a surfactant in the ELM system [26]. To facilitate both the emulsification of the w/o
emulsion and the creation of the double w/o/w emulsion for the extraction step, the use of the
hydrophilic Tween 80 as a co-surfactant was investigated, while to improve the overall stability of the
emulsion the use of butanol as co-surfactant was investigated. In addition to the palm oil based
emulsions, some emulsions were prepared using kerosene as organic diluent with Tween 80 and butanol
as co-surfactants in order to evaluate the effect of these.
4.5.2 Metal extraction experiments
In previous studies the compatibility of TOMAC and chromium has been ensured, and the suitable pH for extraction is known to be around 0.5. Consequently, the metal extraction experiments conducted in this subproject directly assess the whole ELM process.
The external phase was prepared by dissolving 0.283 g of K2Cr2O7 in 1 L water, obtaining a chromium ion concentration of approximately 100 ppm. The pH was adjusted using hydrochloric acid. The internal phase was prepared by dissolving NaOH pellets in water to obtain a known molarity, and the ELM was created through emulsification in a 100 ml beaker. The external phase and the ELM phase were
contacted by pouring the ELM into the external phase solution, contained in a 250 ml beaker, while
stirring and samples were taken using syringes at different time intervals (0.5, 1, 2, 4, 7, 11 and 15 min,
however, not always for all times). As the Cr(VI) solution has a bright yellow colour it was possible to
use the colour change as a rough indication of whether the extraction had been successful or not, before
the concentration measurements were made by ICP-OES (see Figure 4.5). The external phase samples
were diluted if needed while carefully noting the dilution factor, transferred into “ICP-tubes” and taken
to the ICP-OES for concentration measurements. The concentrations of the samples obtained were
multiplied with the dilution factor and finally the removal efficiency was calculated. The parameters
investigated for the chromium extraction experiments can be seen in Table 4.2.
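As a quick sanity check of the external phase preparation described above, the stated mass of K2Cr2O7 indeed corresponds to roughly 100 ppm chromium (a small Python calculation using standard molar masses):

```python
M_K2CR2O7 = 294.18  # g/mol
M_CR = 52.00        # g/mol; two Cr atoms per formula unit

moles_salt = 0.283 / M_K2CR2O7          # mol of K2Cr2O7 in 1 L
mg_cr_per_litre = moles_salt * 2 * M_CR * 1000
print(round(mg_cr_per_litre, 1))        # ~100 mg/L, i.e. ~100 ppm Cr(VI)
```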
Figure 4.5: The change in colour during the extraction experiments as a function of time: the samples to the left were taken first, and the yellow colour fades as the extraction time increases towards the right.
Table 4.2: Parameters investigated in the metal extraction experiments.
Main area Parameter Range
ELM formulation Surfactant (Span 80) conc. 2.5-3 wt%
Butanol conc. 0-2 wt%
Tween 80 conc. 0-2 wt%
Stripping agent (NaOH) conc. 0-0.5 M
Carrier (TOMAC) 0-0.4 wt%
O/I phase mass ratio 2 to 3
External phase Cr concentration 100 ppm and some experiments
with 50 ppm and 10 ppm
Water type Distilled /de-ionized /tap water
Contacting external and Agitation speed 400 – 800 rpm
ELM phases Treat ratio 1:1 to 1:3
Since it was shown by Güell et al that the presence of various anions in high concentrations gave no significant difference in terms of permeability for extraction of arsenic using SLM [52], the influence of the type of water was investigated by preparing both the external phase and the internal phase of the ELM with either de-ionized, distilled or tap water.
4.5.3 Experimental design and optimization studies
For the experimental design study the Response Surface Method (RSM) was chosen. RSM is a collection
of mathematical and statistical techniques for modelling and analysis of problems in which the response is
influenced by several factors and the objective is to optimize this response. An experimental design of
orthogonal columns was used for fitting the response, shown in Table 4.3. As can be seen, the
parameters investigated were the agitation speed when contacting the external phase with the ELM, and
the amount of butanol and Span 80 respectively in the ELM.
Palm oil was used as diluent; the amounts of 1 wt% Tween 80 and 0.35 wt% TOMAC, and a stripping agent concentration of 0.1 M NaOH, were held constant. The ELM was prepared with an emulsification time of 11 min and a homogenization speed of 3400 rpm. The initial chromium concentration was 100 ppm and the pH of the external phase was 0.5. Due to the high viscosity of the ELM, an agitation speed of more than 600 rpm was required when contacting the external phase with the ELM, and therefore the agitation was increased to 800 rpm for the first 30 seconds.
Table 4.3: Experimental design performed.
Run | X1 (coded) | X2 (coded) | X3 (coded) | Agitation (rpm) | Span 80 (wt%) | Butanol (wt%)
1 | 0 | -1 | 0 | 600 | 2.5 | 0.5
2 | 0 | 1 | 0 | 600 | 3 | 0.5
3 | 1 | -1 | 0 | 800 | 2.5 | 0.5
4 | 1 | 1 | 0 | 800 | 3 | 0.5
5 | 0 | 0 | -1 | 600 | 2.75 | 0
6 | 0 | 0 | 1 | 600 | 2.75 | 1
7 | 1 | 0 | -1 | 800 | 2.75 | 0
8 | 1 | 0 | 1 | 800 | 2.75 | 1
9 | -1 | -1 | -1 | 400 | 2.5 | 0
10 | -1 | -1 | 1 | 400 | 2.5 | 1
11 | -1 | 1 | -1 | 400 | 3 | 0
12 | -1 | 1 | 1 | 400 | 3 | 1
13-19 | -1 | 0 | 0 | 400 | 2.75 | 0.5 (seven replicate runs)
The polynomial models used to describe the response are seen in Equation 4.2 (first order linear model including interaction terms) and Equation 4.3 (second order linear model including interaction and quadratic terms). The parameters β are obtained by regression analysis.

ŷ = β0 + Σi βi·xi + Σi<j βij·xi·xj    Equation 4.2

ŷ = β0 + Σi βi·xi + Σi<j βij·xi·xj + Σi βii·xi²    Equation 4.3

where the sums run over the factors i = 1, …, n and over the pairs i < j.
The calculations and regression analysis were performed using MATLAB programs designed by Jan Rodmar.
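The regression itself is an ordinary least-squares fit of the model terms to the measured responses. Since the original MATLAB programs are not reproduced here, the following Python sketch illustrates the idea for the second-order model of Equation 4.3; the X and y arrays are placeholders to be filled with the coded levels of Table 4.3 and the measured removals.

```python
import numpy as np

def design_matrix(X: np.ndarray) -> np.ndarray:
    """Model terms of Equation 4.3 for coded factors X (n_runs x 3):
    intercept, linear, pairwise interaction and quadratic columns."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)),             # beta_0
        x1, x2, x3,                  # linear terms
        x1 * x2, x1 * x3, x2 * x3,   # interaction terms
        x1 ** 2, x2 ** 2, x3 ** 2,   # quadratic terms
    ])

# Coded levels from Table 4.3 (first three runs shown) and measured responses.
X = np.array([[0, -1, 0], [0, 1, 0], [1, -1, 0]], dtype=float)  # ... all 19 runs
y = np.array([95.0, 93.0, 97.0])                                # placeholder removals (%)

beta, residuals, rank, _ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
```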
4.6 Subproject 2: Arsenic extraction
4.6.1 Compatibility and pH ranges
As no previous studies of using TOMAC as carrier in an ELM for arsenic extraction were found, it was necessary to verify the compatibility between the carrier TOMAC and the As(V) complex and to assess the suitable pH range of the external phase. A series of simple liquid-liquid extraction experiments were performed, varying the pH of the external phase from pH 2 to pH 12. The external phase batch was prepared by dissolving 0.416 g of Na2HAsO4·7H2O in 1 L distilled water, hence obtaining a concentration of approximately 100 ppm As(V) ions. The solution was transferred into six separate bottles, each pH adjusted to obtain pH 2, 4, 6, 8, 10 and 12 respectively using HCl(aq) and NaOH(aq).
A treat ratio of external phase to membrane phase 2 was desired to simulate ELM process conditions and
27 |
Chalmers University of Technology | Methodology and experimental techniques
a molar excess of TOMAC was needed, as the extracted metal binds to the carrier in the organic phase but, due to the lack of an internal phase, is not released, and therefore no regeneration of the carrier is possible. At low and intermediate pH, H2AsO4− and HAsO42− are present, requiring a molar ratio of TOMAC to As(V) of at least 2:1, and at higher pH the AsO43− ion dominates, requiring a molar ratio of 3:1. To facilitate the experiments, a molar ratio of at least 3:1 was kept for all solutions. The extracting solvent (the organic phase) was prepared by mixing 6.6 g kerosene with a minimum of 0.04 g TOMAC for 1 min at an agitation speed of 200 rpm. Some experiments were performed with 6.6 g palm oil and a minimum of 0.04 g TOMAC as well, and 26 ml of the external phase was used in each experiment.
Then a new series of liquid-liquid extractions were performed, adjusting the pH of the external phase to
pH 6, 9 and 12, and in these experiments the agitation speed was varied from 200-800 rpm and
the extraction time was varied from 3-11 min. Samples drawn from the aqueous phase after the separation were taken to the ICP-OES for concentration measurements.
4.6.2 Arsenic extraction using EILM
The next step was to perform metal extraction experiments with the EILM system and from the results
obtained in the liquid-liquid extraction a suitable pH of the external phase could be determined. The
internal phase was kept acidic, with a concentration of 0.01-0.1 M HCl, and the external phase was kept basic, at pH > 8, adjusted using NaOH(aq). The membranes were prepared by emulsification and experiments were performed with both kerosene and palm oil as diluents, Span 80 as surfactant, [BMIM]+[NTf2]− as stabilizer and, in some experiments, Tween 80 and butanol as co-surfactants. The agitation speed was kept at 400 rpm.
5 RESULTS AND DISCUSSION
5.1 Subproject 1: using palm oil as diluent
5.1.1 Emulsion stability studies
5.1.1.1 Solubility tests – palm oil
The different components were mixed with the diluent in order to investigate the solubility, and the observations are seen in Table 5.1. As shown, the ionic liquid [BMIM]+[NTf2]− does not seem compatible with palm oil and is consequently probably not a useful component in the palm oil based emulsion.
Table 5.1: Solubility tests.
Diluent | Component/s | Observation
Palm oil | Tween 80 | No sign of separation after one week
Palm oil | [BMIM]+[NTf2]− | Cloudy upon stirring; [BMIM]+[NTf2]− sinks to the bottom after 1.5 h
Palm oil | Span 80 | No sign of separation after one week
Palm oil | [BMIM]+[NTf2]− and Tween 80 | [BMIM]+[NTf2]− sinks to the bottom after 10 min
5.1.1.2 Stability studies – palm oil
The main purpose of the emulsion stability studies was to evaluate the possibility of using palm oil as an
organic diluent for the metal extraction process. Important aspects of the emulsion used in the ELM are
viscosity and stability. General observations regarding stability and apparent viscosity are summarized here:
The stability of palm oil-based emulsion is increased by:
- Organic to aqueous phase weight ratio 3:1
- Use of Span 80 as surfactant
- Addition of butanol and/or Tween 80
The viscosity of palm oil-based emulsion is decreased by:
- Addition of Tween 80
- Lower homogenization speed <3500 rpm
Figure 5.1 shows an emulsion prepared with palm oil as organic diluent, containing 1 wt% Tween 80, 0.35 wt% TOMAC and 3 wt% Span 80. The solution is homogeneous, and phase separation of this emulsion was observed after approximately one hour.

Figure 5.1: A homogeneous emulsion with palm oil as diluent.

It was found that a relatively high HLB (>7) was possible while maintaining a w/o emulsion with palm oil as diluent and the organic to internal phase ratio O/I = 3, verified by dilution tests. It was also found that the use of [BMIM]+[NTf2]− as a stabilizer did not enhance the stability of the palm oil based emulsion. [BMIM]+[NTf2]− is not soluble in palm oil, probably because of its polarity but also due to its higher density compared to palm oil. The triglycerides and diglycerides present in palm oil are generally not amphiphilic enough to be soluble in water and may therefore not contribute to the reduction of the surface tension between the aqueous and oil phase in the emulsion; consequently the main inherent contributions to the stability are the high viscosity of the oil and the presence of other surface-active compounds.
From the results obtained in the emulsion stability studies, extraction experiments were made using
emulsions containing palm oil, Span 80 (2.5-3 wt%), TOMAC (around 0.35 wt%), varying content of
Tween 80 and of butanol, and with an O/I = 3. The surfactant concentration of around 3 wt% is consistent with previous studies [26]; however, those studies used kerosene as diluent, and due to the high viscosity of palm oil some extraction experiments were performed with a lower surfactant concentration in order to decrease the viscosity of the ELM.
As the addition of Tween 80 lowers the viscosity of the emulsion, it was desirable to develop a
formulation containing the mentioned component. It is also known that Tween 80 and Span 80 are
commonly used together to facilitate the formation of a multiple w/o/w emulsion, which will be
developed in the extraction experiments. The CPP of the surfactants also has an influence on the
stability of an emulsion and whether a w/o or an o/w emulsion is formed, as the CPP determines the
curvature of the emulsion droplets. An efficient packing of the surfactants in the interfaces makes the
emulsion droplets more stable, and to achieve this butanol was used, which adsorbs at the w/o interface and minimizes the repulsion of the hydrophilic head-groups of the surfactants. An increased stability time
was observed for emulsions prepared with butanol as co-surfactant, and because of this extraction
experiments containing butanol were carried out.
The homogenization speed in the emulsification step was kept at the lowest level for all extraction
experiments, 3200-3400 rpm, because a higher speed resulted in a highly viscous, “mayonnaise-like”
emulsion not suitable for extraction. One reason for this may be a foaming mechanism, where
air-bubbles are incorporated into the emulsion phase. The viscosity may also increase due to a higher
dispersion of the internal phase and a larger number of smaller internal droplets, which may lead to a
more rigid system. The emulsification time, including the time required for addition of the internal
phase was kept at 11 min. Microscopic studies of the emulsion droplets size, how they are affected with
respect to homogenization speed and also the change in size with respect to time remain to be explored.
The more precise properties of palm oil and how these interact with the components of an ELM
formulation also need more thorough investigations.
5.1.1.3 Stability studies – kerosene
Some stability studies were performed with kerosene as diluent, to verify suitability of the emulsion
formulation known from previous studies and also to investigate the possibility of a further increase in
emulsion stability through the addition of co-surfactants. The kerosene-based emulsions are used for the
extraction of arsenic in subproject 2.
The stability of kerosene-based emulsion is increased by:
- Organic to Internal phase weight ratio 1
- Addition of butanol and/or Tween 80
- Homogenization speed > 7000 rpm
It was observed that the use of [BMIM]+[NTf2]− as stabilizer was necessary for the emulsion to be stable for longer than 30 min. As the viscosity of kerosene is low, the emulsion also has a very low viscosity. One reason for the increased stability of the emulsion containing [BMIM]+[NTf2]− is that it increases the viscosity; another is that it (like surfactants) decreases the surface tension of the o/w interface.
5.1.2 Chromium extraction experiments
To investigate the possibility of using palm oil as an alternative organic diluent in the ELM separation
process, numerous chromium extraction experiments were performed. The results showed that palm oil works well as an organic diluent and that its high viscosity does not cause problems in terms of extraction efficiency. However, it was observed that when contacting the external phase with the ELM phase a higher agitation speed (>600 rpm) was needed compared to the kerosene-based ELM (<400 rpm) in order for the solutions to mix well, and it was also observed that the use of Tween 80 as a co-surfactant in the membrane phase facilitated the mixing remarkably.
the mixing remarkably. Tween 80 is a highly hydrophilic surfactant and should therefore not be soluble
in the oil phase of the system. As it is incorporated during the emulsification of the ELM phase, and
therefore present at the membrane-internal interface, it is believed that some Tween 80 molecules are
transported by microscopically small water droplets to the membrane-external interface, which lowers
the interfacial tension and facilitates the second emulsification.
The treat ratio (F/ELM) was kept constant at 2, the initial metal concentration was 100 ppm and the pH
of the external phase adjusted to 0.5 using HCl(aq) in all experiments, unless otherwise stated. Figure
5.2 shows how a sample is taken during the extraction experiments.
Figure 5.2: Taking a sample during the contacting of the ELM and external phases for extraction of chromium.
5.1.2.1 The use of palm oil as organic diluent
In Figure 5.3 the removal efficiency of chromium for three different ELM formulations is displayed as a
function of extraction time with the internal stripping agent concentration of 0.1 M NaOH.
[Plot: chromium removal (%) as a function of extraction time (min) for the series DI (1): Sp80, Tw80; DI (2): Sp80, Tw80, ButOH; Dest (1): Sp80, Tw80; Dest (2): Sp80, Tw80, ButOH; Dest (3): Sp80, ButOH.]
Figure 5.3: The graph shows removal of chromium as a function of time. Data are plotted for three ELM formulations as shown
in the legend, all containing TOMAC. “DI” and “Dest” denote de-ionized and distilled water respectively. Sp80, Tw80 and
ButOH denote Span 80, Tween 80 and butanol respectively. NB: the y-axis starts at 50%.
The systems denoted with “DI” in the figure above were prepared using doubly de-ionized water for
both the external phase and for the internal phase, and the systems denoted “Dest” were prepared using
distilled water in the mentioned phases. The numbers denote the different formulations of the membranes, all of which contain approximately 0.35 wt% TOMAC, 3 wt% Span 80 and have O/I = 3.
The first ELM formulation (DI (1) and Dest (1)) contains 1 wt% Tween 80 in addition to the already
stated components, and this formulation shows the highest extraction rate. The second ELM
formulation, DI (2) and Dest (2), also contains 1 wt% Tween 80 and in addition to this 1 wt% butanol.
The Dest (3) formulation contains 1 wt% butanol besides the stated components and has the poorest
performance in terms of removal efficiency. The concentration of TOMAC was chosen to obtain a
molar ratio of more than 2 moles of TOMAC per mole of chromium ions, to ensure that the extraction is not hindered by a lack of available carrier.
The expected appearance of the removal efficiency as a function of time is a steady increase towards a
maximum extraction; however, some of the results, in particular Dest (3), show fluctuations in the removal percentage. A reason for these fluctuations is probably that the mixing of the external phase and
the ELM phase is not entirely homogeneous, reflected in the samples taken during the experiment. A
poor mixing leads to a decrease of the surface area available for mass transfer and will lower the
extraction efficiency. As previously mentioned, the presence of Tween 80 in the membrane phase
decreases the fluctuations, due to the facilitation of creating the multiple w/o/w emulsion, and this can
be seen in Figure 5.3 when comparing the removal of the Dest (3) experiment to the other results, as
this is the only formulation not containing Tween 80. Fluctuations are especially pronounced during the first two minutes of the extraction, and this will be observed in various results throughout the project. The agitation speed when contacting the external phase with the ELM also has a significant influence on the extraction efficiency. In the experiments of DI (1) and Dest (1) the agitation speed was
kept at 800 rpm for the first minute and then lowered to 400 rpm, while in the three other experiments
the agitation was kept at a constant speed of 600 rpm. It was observed that an initial agitation speed
below 600 rpm resulted in poor mixing, but the agitation could be lowered once the solution had
achieved a somewhat homogeneous appearance.
5.1.2.2 Effect of carrier concentration on chromium extraction
To verify the function of the carrier TOMAC in the palm oil-based ELM, metal extraction experiments
were carried out with a membrane phase prepared without the incorporation of TOMAC. The carrier has a significant influence on the extraction process; it is not needed in a large amount, but its absence leads to a large reduction of the removal efficiency, see Figure 5.4 below. As can be seen in the graph, only a small fraction of the metal is extracted in the absence of carrier; the removal is then only facilitated by the mass transfer of the metal through the membrane to the internal phase, in which a reaction with the stripping agent NaOH occurs. As the pH of the external phase is kept at 0.5, chromium exists in an anionic form, which is poorly soluble in the oil phase of the membrane.
[Plot: chromium removal (%) as a function of extraction time (min) for the series No carrier (1), No carrier (2) and 0.35 wt% carrier.]
Figure 5.4: Efficiency of chromium removal for ELM formulations with carrier (pink) and without carrier (red)
The results from the experiments in Figure 5.4 confirm the need for an incorporated carrier in the ELM formulation and demonstrate the role and impact of TOMAC on the overall process.
5.1.2.3 Effect of water type on chromium extraction
Three types of water were compared: doubly de-ionized water, distilled water and tap water. The
results indicate no significant difference in removal efficiency when varying the water type, which can be
seen in Figure 5.5. The graph shows duplicated experiments conducted with ELM formulations identical
to DI (1) and Dest (1) stated above.
[Plot: chromium removal (%) as a function of extraction time (min) for de-ionized, distilled and tap water, using palm oil based membranes containing 3 wt% Span 80, 0.35 wt% TOMAC and 1 wt% Tween 80.]
Figure 5.5: Extraction efficiency for different water types.
From the results in the graph above, the influence of the water type seems to be negligible in terms of
final removal efficiency and the same results were obtained in other experiments carried out with
varying ELM content. This means that the system is not disturbed by the presence of other ionic species
in the water, which is beneficial when considering the implementation of the ELM technique in industry.
The tap water in Malaysia contains iron and other ionic species such as chlorides, sulphates and nitrates.
5.1.2.4 Effect of internal stripping agent concentration on chromium extraction
The stripping agent used for the extraction of chromium was NaOH, and the effect of its concentration
on the removal efficiency can be seen in Figure 5.6. The experiments were performed to ensure that a
similar optimal concentration of NaOH was obtained in the ELM with palm oil as diluent compared to
previous studies of ELM with kerosene as diluent.
[Plot: chromium removal (%) at an extraction time of 7 min as a function of NaOH concentration (0-0.5 M) for the series Membrane A, Dest; Membrane B, Dest; Membrane A, DI; Membrane B, DI.]
Figure 5.6: The effect of stripping agent concentration on the extraction of chromium. “Dest” denotes distilled water and “DI” denotes de-ionized water. Both membranes contain 3 wt% Span 80, 1 wt% Tween 80 and 0.35 wt% TOMAC; membrane A contains 1 wt% butanol in addition.
The graph shows the removal percentage of a sample taken at an extraction time of 7 min as a function
of stripping agent concentration. The same trend is observed regardless of water type: the efficiency is
highest when the internal phase has a concentration of around 0.1 M NaOH, with a pH of around 11.4.
This result is consistent with the results obtained by Goyal et al for an ELM with kerosene as diluent. A
concentration of NaOH higher than 0.1 M leads to a strong pH gradient, increasing the difference in osmotic pressure and consequently the risk of swelling of the internal droplets, which eventually leads to a rupture of the membrane. The consequence of the rupture is that the internal phase is released into the
external phase, which reduces the amount of NaOH available for the stripping reaction of the metal
complex. Another explanation can be that NaOH has a tendency to react with Span 80 [54], thereby
modifying the properties of these components through forming other compounds that decrease the
emulsion stability.
5.1.2.5 Effect of external phase concentration on chromium extraction
The graph in Figure 5.7 shows the removal efficiency when the initial concentration of chromium was
50 ppm. As can be seen in the graph, the extraction is very fast and almost all chromium is extracted.
[Plot: chromium removal (%) as a function of extraction time (min) for the 19 experimental runs PX1-PX19 of the interaction and optimization studies, based on a membrane with palm oil as diluent.]
Figure 5.8: Removal of chromium (%) as a function of time from the experiments of the interaction and optimization studies.
The y-axis starts at 60%.
Regarding the design of the experiments, a mistake was made when planning the trials. The experimental runs were based on a three-variable Box–Behnken design [55]. The low value for the agitation speed was accidentally assumed to be the centre point, giving the design shown in Figure 5.9, which still maintains orthogonal columns. The use of orthogonal experimental points improves the accuracy of the model and makes it possible to study linear and interaction effects. MATLAB statistical tools were used for all calculations.
The empirical model was fitted to the response through a regression analysis, and the best fit was obtained when including the linear, interaction and squared terms. However, the results from the experiments in the design have a low variance at the high extraction time (15 min), and the results obtained at 1 min had too high a variance; consequently, only the removal percentage after 7 min could be used in order to get a significant model. The parameter table obtained is seen in Table 5.2, and a p-value < 0.05 signifies that the parameter in question is significant¹¹.
Figure 5.9: The experimental design performed.
¹¹ If a parameter is not significant it means that it has a very small influence on the response. According to the hierarchy principle, if a model contains a high-order term (e.g. X₁X₃), it should also contain all of the lower-order terms that compose it (i.e. X₁ and X₃). Because of this, X₁ and X₃ with corresponding parameters are also included in the model [55].
ŷ = β₀ + β₁X₁ + β₂X₂ + β₃X₃ + β₁₂X₁X₂ + β₁₃X₁X₃ + β₂₃X₂X₃ + β₁₁X₁² + β₂₂X₂² + β₃₃X₃²   (Equation 5.1)
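A minimal sketch of this kind of response-surface fit and optimization, with Python standing in for the MATLAB statistical tools actually used; the design points and responses below are synthetic placeholders, not the thesis data:

```python
import numpy as np

def quadratic_terms(X):
    """Expand coded factors (n x 3) into the model columns of Equation 5.1:
    intercept, linear, two-factor interaction and squared terms."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(19, 3))  # stand-in coded design points
y = (95 + 2 * X[:, 0] - 1.5 * X[:, 1] + 3 * X[:, 0] * X[:, 1]
     - 4 * X[:, 0]**2 - 2 * X[:, 1]**2
     + rng.normal(0, 0.3, 19))            # stand-in removal responses (%)

beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)  # least squares

# Locate the optimum of the fitted surface with a coarse grid search
# over the coded region [-1, 1]^3.
grid = np.linspace(-1, 1, 41)
G = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
pred = quadratic_terms(G) @ beta
best = G[np.argmax(pred)]
print("coded optimum:", best, "predicted removal:", pred.max())
```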
The optimized values obtained from this model were an agitation speed of 522.6 rpm (X₁ = -0.387), a Span 80 concentration of 2.58 wt% (X₂ = -0.680) and a butanol concentration of 0.515 wt% (X₃ = 0.031). The optimal response from this model is a removal of 99.88% chromium.
As can be seen, the interaction between the agitation speed and the concentration of Span 80 is
significant. Figure 5.10 shows the response surface from the model, where the concentration of butanol
is kept at its optimum and the interaction between Span 80 concentration and agitation speed can be
seen. The interaction can be explained in terms of stability and viscosity of the membrane; an increase in
Span 80 concentration contributes to an increase in the stability of the emulsion due to the decrease in
interfacial energy of the oil and water interface. However, it also increases the viscosity of the
membrane. If the viscosity is increased, a higher agitation speed is required for the external and
membrane phase to mix well, but this also induces shear stress on the membrane, which could
contribute to emulsion breakage. Therefore, at higher agitation speed, a higher concentration of Span 80
is required to compensate for this. The same reasoning can be applied for a lower agitation speed, allowing a lower concentration of Span 80, and the optimum response was found when both the agitation speed and the concentration of Span 80 were lowered below their centre points in these experiments.
Figure 5.10: Response surface plot for the interaction of Span 80 concentration and agitation speed. Butanol concentration is held constant at 0.515 wt% (X₃ = 0.031).
Figure 5.11 shows the response surface plot of the interaction between the concentration of butanol and the agitation speed, which is also significant. The concentration of Span 80 is held constant at the optimum level, and the plot shows that a higher agitation speed and a lower concentration of butanol (close to 0 wt%) result in a lower response, which could be explained by a decreased stability of the membrane. Butanol is believed to enhance the stability of the membrane by acting as a co-surfactant in the emulsion, adsorbing at the w/o interface and in that way minimizing the repulsion of the hydrophilic head groups of the surfactants. This reduces the interfacial tension of the w/o interface, gives a higher water solubilisation and decreases the water droplet size. However, an increased amount of butanol together with a decreased agitation speed also lowers the response, and the optimum was found at a concentration around 0.5 wt%. As butanol is soluble in both the water phases and the oil phase it may, when present in higher concentrations, migrate from the interfaces to the external phase and react with HCl, producing chlorobutane and water. This would increase the pH in the external phase and affect the extraction rate, since the pH gradient is critical for efficient extraction.
Figure 5.11: Response surface plot for the interaction of butanol concentration and agitation speed. Span 80 concentration is held constant at 2.58 wt% (X₂ = -0.680).
5.2 Subproject 2: arsenic extraction
5.2.1 Compatibility of arsenic and TOMAC
To verify the compatibility of arsenic(V) with TOMAC, simple liquid-liquid extractions were performed to investigate in what pH range TOMAC creates a complex with the arsenic(V) ion compound. As mentioned in Section 3.4.2, the basic property of TOMAC favours reaction with anionic complexes. Arsenic(V) exists in the forms H₃AsO₄, H₂AsO₄⁻, HAsO₄²⁻ and AsO₄³⁻ over the pH range investigated. At higher pH, HAsO₄²⁻ is dominant, while H₃AsO₄ and AsO₄³⁻ may be present under strongly acidic or strongly basic conditions respectively, see Figure 5.12 or Figure 3.12 [13].
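The pH dependence follows from the stepwise dissociation of arsenic acid. A minimal speciation sketch; the pKa values (≈ 2.2, 7.0 and 11.5) are approximate literature values, not taken from this thesis:

```python
# Fraction of each As(V) species vs. pH from the triprotic equilibria of
# H3AsO4; the pKa values are approximate literature values (assumption).
pKa = (2.2, 7.0, 11.5)
Ka1, Ka2, Ka3 = (10.0**-p for p in pKa)

def fractions(pH):
    h = 10.0**-pH
    d = h**3 + h**2 * Ka1 + h * Ka1 * Ka2 + Ka1 * Ka2 * Ka3  # denominator
    return (h**3 / d,              # H3AsO4
            h**2 * Ka1 / d,        # H2AsO4-
            h * Ka1 * Ka2 / d,     # HAsO4 2-
            Ka1 * Ka2 * Ka3 / d)   # AsO4 3-

for pH in (2, 4, 7, 10, 12):
    print(pH, [round(f, 3) for f in fractions(pH)])
```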
The results from the experiments using kerosene as diluent in the liquid-liquid extraction are shown in Figure 5.12 (blue line). The results are in agreement with the literature: TOMAC is unlikely to react with the neutral arsenic complex H₃AsO₄ under acidic conditions (pH 2-4) and prefers to react with the anionic H₂AsO₄⁻ and HAsO₄²⁻ when the pH is increased (pH > 4). The figure also shows that in the case where kerosene is used as diluent, the removal of arsenic(V) decreases under strongly basic conditions (pH > 10). Similar liquid-liquid extraction experiments were performed using palm oil as diluent, see Figure 5.12 (brown line), in order to study the flexibility of the choice and role of the diluents in the ELM process. The few experiments that were performed showed that the extraction rate is considerably lower compared to the use of kerosene as diluent, and that the extraction increases with increasing pH. However, at pH 10, which was the optimal pH for extraction when using kerosene as diluent, the extraction of arsenic when using palm oil as diluent is still very low. It was observed during the experiment that the viscosity of the mixed palm oil and external phase increased in this pH range, and this could be a reason for the lower extraction. In any case, the two experiments show different maximum values.
[Figure: arsenic removal (%, 0–70%) vs. pH (2–12) in simple liquid-liquid extraction; series: palm oil based and kerosene based.]
Figure 5.12: The removal efficiency of arsenic using kerosene (blue) and palm oil (brown) as diluent, and the predominant species of arsenic(V) at various pH.
Considering the results from the kerosene based extraction, an explanation as to why the extraction decreases at pH 12 (where the concentrations of HAsO₄²⁻ and AsO₄³⁻ are equal according to the literature) is either that there is no extraction of AsO₄³⁻ while all of the HAsO₄²⁻ species are extracted, or that there is a lower extraction for both species. If TOMAC extracts HAsO₄²⁻ to a larger extent than AsO₄³⁻, the explanation could be that because there is only one electron-deficient nitrogen present in TOMAC, and the latter reaction requires a higher number of moles of TOMAC, the complex is unlikely to be created. It could simply be a charge effect: as TOMAC has a single positive charge, it rather forms a complex with an anionic species of a charge of the same magnitude. However, if this were the case, TOMAC should have extracted H₂AsO₄⁻ at the lower pH conditions. Li and Yan mentioned that arsenic(V) may create a large complex in the presence of strong acid ([AsCl₄]⁺[AsCl₆]⁻), which is unlikely to penetrate the membrane in the ELM process [48], and this could also hinder the extraction during the simple liquid-liquid extraction process. A similar effect could occur under strongly basic conditions, where the complex would be too large to penetrate the oil phase. In addition, the presence of a high amount of chloride ions at low pH could decrease the selectivity of TOMAC for the reaction with the arsenic(V) species, and the same reasoning may be applied for strongly basic conditions, where the high concentration of hydroxide ions in the external phase could decrease the selectivity.
A series of simple liquid-liquid extractions was also performed using three different agitation speeds and extraction times, and showed that the same overall trend occurred regardless. Finally, the results show that TOMAC is compatible with the anionic complex of arsenic(V), preferably in the pH range of 9-10, and give an indication of the optimal conditions preferred for the external phase or waste water when extracting arsenic(V) using TOMAC as a carrier in the ELM process.
5.2.2 Arsenic extraction using EILM
No results were obtained that showed a consistent extraction of arsenic using the EILM formulation containing kerosene, Span 80, [BMIM]⁺[NTf₂]⁻, TOMAC and/or Tween 80 and/or butanol, even though the liquid-liquid extraction showed that TOMAC is compatible with the arsenic complex and suitable external phase conditions were created (basic conditions, pH adjusted using NaOH) for the reaction to occur. The concentration of TOMAC was chosen to obtain a molar ratio of 3:1 (TOMAC:As(V)) to ensure that the transport is not hindered by a lack of extractant. The internal phase was kept acidic to create a pH gradient, by varying the concentration of HCl from 0.1 to 0.01 M. An expected stripping reaction would yield [AsCl₄]⁺[AsCl₆]⁻, a large complex unlikely to diffuse back to the external phase. The formation of H₃AsO₄ is also likely to occur in the internal phase, due to the high presence of H⁺ ions.
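As a quick sanity check of the 3:1 dosing, an illustrative calculation under assumed volumes and concentrations (none of the numbers below are reported values from the thesis):

```python
# Moles of TOMAC needed for a 3:1 TOMAC:As(V) ratio; the external-phase
# volume and arsenic concentration below are assumptions for illustration.
MW_AS = 74.92     # g/mol, arsenic
MW_TOMAC = 404.2  # g/mol, methyltrioctylammonium chloride (approx.)

v_ext_l = 0.1     # assumed external phase volume, litres
c_as_ppm = 100.0  # assumed initial As(V) concentration, mg/L

mol_as = c_as_ppm * v_ext_l / 1000.0 / MW_AS  # mol of As(V) to extract
mol_tomac = 3.0 * mol_as                      # 3:1 molar ratio
print(f"TOMAC required: {mol_tomac * MW_TOMAC * 1000:.0f} mg "
      f"({mol_tomac * 1e3:.2f} mmol)")
```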
Instead, the results showed an increase in arsenic concentration in the external phase and no sign of metal removal, see Figure 5.13. The only way in which the concentration of arsenic can increase in the external phase is if water is somehow removed from it. The difference in osmotic pressure contributes to a transport of water molecules to the internal phase, where the internal phase droplets increase in size. An increased amount of acid in the internal phase would lead to an increased pH difference between the internal and external phase, which would increase the water permeability of the membrane. Since the chemical potential difference between the internal phase and the external phase is the driving force for osmotic swelling, an increase in the chemical potential difference will contribute to an increase in the osmotic pressure.
Wan and Zhang have observed that the type of surfactant used affects the swelling phenomenon¹⁴, and that the use of amide-based surfactants with higher molecular weight is superior to the use of Span 80. The low molecular weight of Span 80 and its large hydrophilic group, comprised of oxygen with high electronegativity, give a higher hydration capacity and larger diffusivity compared to surfactants with a higher molecular weight and hydrophilic groups mainly comprised of nitrogen with relatively low electronegativity [56].
The organic to internal phase ratio was in most cases kept at 1, which is higher than recommended but was chosen due to the increased emulsion stability, and the question arises whether this could cause a phase inversion of the ELM phase. Sabry et al. [57] showed that the internal phase volume fraction (I/O) cannot be increased indefinitely; they found an optimum value of the O/I ratio at 1 for lead removal, and because of this the suspected phase inversion may be discarded. Besides, if a phase inversion did occur, it would dilute the external phase with the released aqueous internal phase rather than concentrate it.
¹⁴ Swelling increased in the following order: Span 80 > Lan113A > ENJ-3029 > LMA.
[Figure: arsenic concentration in the external phase (ppm, 60–140) vs. time (0.5–11 min) for runs KEx1–KEx37; the initial concentration is marked by a black line.]
Figure 5.13: Results obtained from the arsenic extraction. The black line shows the initial concentration of arsenic in the
external phase.
Samples taken from the external phase after the agitation had been stopped and the system had been left undisturbed for a couple of hours, hence obtaining a complete phase separation between the ELM phase and the external phase, also showed an increase in arsenic concentration (an average increase of 44% was observed). This is questionable, because the w/o emulsion should have been broken, leading to a leakage of the internal phase out into the external phase, which would give the initial arsenic concentration or less due to dilution. However, emulsions can be very concentrated, with above 90% dispersed phase [31], and if the emulsion is still stable a further uptake of water is therefore possible. For spherical droplets this would require a broad distribution of droplet sizes, with smaller droplets filling the space between larger ones. This is doubtful due to the lack of supplied mechanical energy once the agitation has been stopped. Other packing structures such as hexagonal packing might be possible, depending on the structure and interactions of the surfactants and the ionic liquid [BMIM]⁺[NTf₂]⁻.
Furthermore, there are other factors that could have hindered the extraction, but these do not correlate well with the results, nor do they explain the increase in concentration. For example, the hydrochloric acid in the internal phase is quite likely to react with the esters of both Span 80 and Tween 80, which results in a partial loss of the surface-active properties and affects the ELM both in terms of viscosity and stability. Because the carrier has shown compatibility with the arsenic complex, it is not likely that the carrier is the issue. However, TOMAC could react with hydroxide ions in the external phase or the chloride ions in the internal phase, which would decrease the selectivity towards arsenic.
6 CONCLUSIONS
We have found that the petrochemical diluent kerosene used in previous studies can be exchanged for the vegetable-based and more environmentally friendly palm oil. The use of palm oil as a diluent in the w/o emulsion was successful, and the ELM remained stable long enough for the extraction to occur satisfactorily. Regarding the creation of the w/o emulsion and the parameters affecting the emulsion stability, a homogenization speed higher than 3500 rpm (for a solution contained in a 100 ml beaker) resulted in a highly viscous emulsion not suitable for extraction. The use of Tween 80 as a co-surfactant was beneficial as it decreased the viscosity of the emulsion; in addition, a notable difference was observed when the external phase and the ELM phase were contacted for the extraction experiments: a more homogeneous solution was obtained when Tween 80 was present. The mixing is important: a high dispersion of the ELM phase increases the surface area available for mass transfer, and a faster removal rate could be observed. The use of butanol as co-surfactant enhanced the stability of the emulsion, but might not be necessary for an ELM formulation used for extraction. Palm oil has a high viscosity, which is beneficial for the stability of emulsions but disadvantageous because of the increased mass transfer resistance; however, because the extraction of chromium was successful, it can be concluded that the high viscosity of the palm oil does not decrease the extraction rate in our system.
The use of palm oil as organic diluent in emulsions and ELM formulations has many benefits. Palm oil is non-toxic, it is produced from renewable resources, and it is cheap and readily available in Southeast Asia. Palm oil is an important source of economic income for Malaysia and Indonesia, the main producers of the oil, and its productivity is high compared to many other vegetable oils. On the other hand, the production of palm oil is controversial and contributes to the devastation of rainforests in Malaysia and Indonesia in particular, and the ecosystem is destroyed when bio-diverse rainforest is replaced by the monoculture of oil palm plantations.
Experiments carried out with a low initial concentration of chromium in the external phase resulted in complete removal of the metal, and the lower the initial concentration, the higher the extraction rate. Whether the source of the water and the presence of other ions in the external phase could affect the removal efficiency was studied by comparing external phases based on de-ionized water, distilled water and tap water. The results showed that the extraction efficiency is not significantly affected by the differences in purity between the three investigated water types. This means that the system is robust and may be developed further for real industrial applications aimed at the removal of metals from waste water, where various ions may be present.
The stripping agent concentration is important with regard to the emulsion stability, and a concentration higher than 0.1 M NaOH resulted in decreased removal efficiency, presumably due to the high difference in osmotic pressure between the internal and external phase. The presence of carrier is crucial for an optimal chromium extraction. The absence of carrier, when the only transport mechanism is diffusion, resulted in 10-20% chromium extraction, whereas the presence of carrier results in >90% chromium extraction. An experimental design was performed and MATLAB software was used as the modelling tool. The optimized parameters obtained were a Span 80 concentration of 2.58 wt%, a butanol concentration of 0.515 wt% and an agitation speed of 522.6 rpm, and the optimal response from the modelling was a removal of 99.88% chromium. The interaction studies showed that, in general, a higher agitation speed when contacting the external and ELM phases requires a more stable emulsion, achieved by a higher content of surfactant or co-surfactant in the ELM formulation. At a lower agitation speed, the content of surfactant needs to be decreased, to decrease the viscosity and facilitate homogeneous mixing. Our project has shown that the many factors that influence the efficiency of the extraction of chromium also have their trade-offs and interactions, and optimization studies are required to obtain an optimal formulation.
In subproject 2 the extraction of pentavalent arsenic was studied. The simple liquid-liquid extraction experiments showed that the carrier TOMAC is compatible with the arsenic complex and that transport exists. The preferable pH condition in the external phase was in the range of 9-10, where the highest extraction was observed, which motivated a modification of the EILM system where the internal phase was kept acidic to maintain a pH gradient. With the compiled optimal EILM composition, having kerosene as diluent, extraction experiments were performed without any significant removal of the metal; instead, an increase in the arsenic concentration was observed. This could be due to membrane swelling, driven by the osmotic pressure, causing an uptake of water into the internal phase of the EILM, which would increase the arsenic concentration in the external phase. The system needs to be further studied and improved in order to achieve extraction. This can be accomplished by choosing other components in the system, for example another carrier, surfactant or stripping agent.
7 FUTURE WORK
This project has explored the application of ELM for the extraction of heavy metals with a focus on using environmentally friendly materials, and since ELM has been shown to be an economical and efficient way of treating waste water it is important to continue improving the process.
An important parameter is the stirring rate, which must be adjusted to achieve a uniform dispersion of the ELM phase and to obtain a high surface area during the extraction. Two areas of future work are to use impellers of different blade sizes and to use beakers with baffles, both of which are ways to improve the overall mixing performance and avoid dead zones in the beaker.
The emulsion stability has a large impact on the extraction efficiency; the emulsion should be stable for a sufficient time and withstand high agitation speeds when contacted with the external phase. Further investigation is needed regarding the emulsion used in subproject 2, where the internal to membrane ratio of 1 may not have been optimal for the extraction but was necessary for keeping the emulsion stable. High priority should also be given to further exploring the use of vegetable oil as diluent. Other stripping agents should be investigated to study whether a reaction between TOMAC and hydrochloric acid occurs that hinders the extraction. The use of other surfactants with a lower hydration capacity and a higher stability under acidic conditions could also be investigated.
The ELM used in subproject 1 contributed to almost 100% extraction, depending slightly on the initial concentration. However, the influence and interactions of all parameters affecting the process, in addition to the ones studied in this project, need to be studied in more detail to optimize the process. It is also possible to study whether the amount of chemicals could be minimized, which would decrease the overall cost of the process. The de-emulsification step needs more attention, as this constitutes the most difficult part of an ELM process. If all components in the system, including recovered metals, can be reused in an efficient way, the overall costs will be further reduced. Water recoverability is another important aspect to investigate further: if all the water can be reused, the process could also be implemented in water-scarce areas.
Regarding the extraction of chromium, the choice of diluent did not have a crucial impact, and the question arises whether palm oil could be replaced by another vegetable oil, such as rapeseed oil, to make the process more flexible and easier to use with regard to material availability.
Furthermore, no measurements of the size of the droplets in the emulsion were made and no quantitative emulsion stability studies were performed. Because of this, further investigations are needed regarding the development of an optimized emulsion formulation to be used for ELM metal extraction. A deeper understanding of the interactions that occur between TOMAC, the internal agent, the diluent, Span 80, [BMIM]⁺[NTf₂]⁻, Tween 80 and butanol is necessary to be able to confirm and understand why the process works or not, and for the creation of a more reliable and efficient ELM process. This deeper knowledge would facilitate the incorporation of other components that could improve the ELM extraction or make the ELM process useful for the extraction of other metals without any major changes to the system.
APPENDIX I
Table I-I: Type of carrier, surfactant, internal solution, external solution and diluent used in metal extraction using ELM processes [30].

Carrier | Metal ion | External solution | Surfactant | Internal solution | Diluent
Cyanex 272 | Cu | CuSO₄ | ECA5025 | 6N H₂SO₄ | Tetradecane
LIX 63/LIX 64N | Cu | Cu salt | Span 80 | HCl | Kerosene
Cyanex 272 | Ni | NiNO₃ | ECA5025 | 6N H₂SO₄ | Tetradecane
D2EHPA | Ni | NiCl₂ | | HNO₃ | Kerosene
PC-88A | Ni | NiSO₄ | Span 80 | Dil. H₂SO₄ | n-Heptane
Cyanex 272/DEHPA | Zn | ZnSO₄ | ECA5025 | 6N H₂SO₄ | Tetradecane
D2EHPA | Zn | ZnCl₂ | Span 80 | HNO₃ | Kerosene
DEHMTPA | Zn | ZnSO₄ | ECA5025 | Thiourea | n-Dodecane
D2EHPA | Ag | AgNO₃ | Span 80 | HNO₃ | Toluene
D2EHPA | Pb | Pb(NO₃)₂ | ECA5025 | HCl | Toluene
PC-88A | Co | CoSO₄ | PX 100 | H₂SO₄ | Paraffin oil
MSP-8 | Pd | Simulated waste | ECA4360 | H₂SO₄ | n-Heptane
TOA | Hg | HgCl₂ | Span 80 | NaOH | Toluene
Adogen | Cd | Pure Cd | Span 80 | NaOH | Dimethyl benzene
Primene JMT | Ag | Ag salt | Not mentioned | H₂SO₄ | Tetradecane
Aliquat 336 | Mo | Na-Mo salt | Monesan | NaOH | Kerosene, Heptane
Aliquat 336 | Cr | Cr(IV) | Span 80 | NaOH | Kerosene
Table I-II: ELM systems for the separation of chromium and arsenic.

Solute | External feed phase | Extractant | Surfactant | Diluent | Internal phase | Efficiency/recovery | Reference
Arsenic | 5.5 mg/L As(III) (as As(OH)₃) in 0.4 M H₂SO₄ | 10 vol% 2-ethylhexanol | 2 vol% ECA 4360 polyamine | 88 vol% n-heptane | 2 M NaOH | >95% | [58]
Chromium | HCl | Aliquat 336 | 3 wt% SPAN 80 | Kerosene | 0.1 M NaOH | | [26]
Chromium | Cr₂O₇²⁻ in 0.5 N H₂SO₄ | 20% tri-n-butyl phosphate (TBP) | 4%–5% SPAN 80 | n-Hexane | 0.1 N NaOH | >99% | [59]
APPENDIX II
Figure II-I: The plot shows the studentized residuals vs. the experimental number (PX1–PX19).
APPENDIX III
Table III-I: Emulsion formulations for arsenic ELM extraction. The compositions of emulsions K13, K8 and K9 are shown below.

Run | F/ELM (treat ratio) | External phase pH | Emulsion | HCl (M) | TOMAC (wt%)
KEx1 | 2 | 6 | K13 | 0.05 | 0.35
KEx5 | 1 | 6 | K9 | 0.05 | 0.35
KEx6 | 3 | 6 | K9 | 0.05 | 0.35
KEx17 | 1 | 9 | K13 | 0.05 | 0.35
KEx18 | 3 | 9 | K13 | 0.05 | 0.35
KEx19 | 1 | 9 | K8 | 0.05 | 0.35
KEx20 | 3 | 9 | K8 | 0.05 | 0.35
KEx22 | 2 | 9 | K13 | 0.1 | 0.35
KEx23 | 2 | 9 | K8 | 0.01 | 0.35
KEx24 | 2 | 9 | K8 | 0.1 | 0.35
KEx26 | 2 | 9 | K13 | 0.05 | 0.45
KEx27 | 2 | 9 | K8 | 0.05 | 0.25
KEx35 | 3 | 9 | K9 | 0.05 | 0.25
KEx36 | 3 | 9 | K9 | 0.05 | 0.45
KEx37 | 3 | 9 | K9 | 0.05 | 0.35
Table III-II: The formulation of emulsions K8, K9 and K13.

Emulsion | Span 80 (wt%) | Tween 80 (wt%) | Butanol (wt%) | [BMIM][NTf₂] (wt%) | I/O (mass ratio)
K8 | 3 | 0 | 0 | 3 | 1
K9 | 3 | 1 | 1 | 3 | 1
K13 | 3 | 1 | 1 | 1 | 1
Risk Assessment for South Africa's first direct wastewater reclamation system for drinking
water production
Beaufort West, South Africa
Master of Science Thesis in the Master’s Programme Geo and Water Engineering
OLLE IVARSSON, ANDREAS OLANDER
Department of Civil and Environmental Engineering
Division of Water and Environment Technology
Chalmers University of Technology
ABSTRACT
In Beaufort West, South Africa's first direct wastewater reclamation plant (WRP) for the production of drinking water was constructed at the end of 2010 as a result of acute water scarcity. Due to the high pathogen load and limited knowledge of WRPs, a risk assessment was conducted. Information and knowledge were gathered during a study visit to the world's first direct reclamation plant in Windhoek, Namibia. As suggested by the EU project TECHNEAU, risks were assessed not only with respect to water quality, but also with respect to water delivery interruptions (quantity). The system boundaries were defined in such a way that the new reclamation system could be stressed and risks originating from the reclamation system could be identified. Hazards were identified using a hazard database also developed by TECHNEAU, and an early version of a hazard database from South Africa's Water Research Commission. The databases were useful, but too general to be used without modification of the defined hazards.
The risk analysis was performed using risk matrices, and an ALARP approach was applied when evaluating the risks. Originally, 70 risks were identified as valid for the system, and five critical risks were identified: one quality related risk and four quantity related risks. The most important treatment barrier used in Beaufort West is reverse osmosis, which has a high treatment efficiency with very few pathogens able to pass through. Therefore, fewer quality related risks were identified compared to quantity related risks. By the use of Multi-Criteria Decision Analysis, suggested risk reduction measures were ranked by costs and reduced risk in both quantitative and qualitative terms.
Key words: Risk Assessment, MCDA, Wastewater Reclamation, Water Scarcity, South Africa, Beaufort West
Risk assessment for South Africa's first plant for direct reclamation of wastewater for the production of drinking water
Beaufort West, South Africa
Master's thesis in Geo and Water Engineering
OLLE IVARSSON, ANDREAS OLANDER
Department of Civil and Environmental Engineering
Division of Water and Environment Technology
Chalmers University of Technology
SAMMANFATTNING
In Beaufort West, South Africa's first plant for direct reclamation of wastewater for the production of drinking water was constructed at the end of 2010, after a long period of acute water scarcity. Due to the high concentration of pathogens in the raw water and limited knowledge of this type of system, a risk assessment has been carried out in this project. Information and knowledge were gathered through a study visit to the world's first plant for direct reclamation of wastewater for drinking water production in Windhoek, Namibia. As stated in the EU project TECHNEAU, drinking water risks should be assessed not only on the basis of water quality, but also on the basis of supply interruptions (quantity). The system boundaries were defined in such a way that the new reclamation system is in focus and risks originating from the plant could be identified. Initiating hazards were identified with the help of a database developed within TECHNEAU, and an early version of a database from South Africa's Water Research Commission. The databases were a good tool, but the hazards are specified too generally to be used without modification.
The risk analysis was performed using risk matrices and by applying ALARP to define risk levels. Originally, 70 initiating hazards were identified as potential risks to the system. Five risks were then identified as critical, of which one concerned quality and four quantity. The most important barrier used in Beaufort West is reverse osmosis, which has a high treatment efficiency with very few pathogens able to pass. Reverse osmosis is the main reason why fewer quality related risks were identified compared with quantity related risks. By the use of multi-criteria decision analysis, suggested risk reducing measures were ranked by cost and reduced risk, both quantitatively and qualitatively.
Keywords: Risk Assessment, MCDA, Reclamation Plant, Water Scarcity, South Africa, Beaufort West
Preface and Acknowledgements
This master's thesis was performed during spring 2011 at Chalmers University of Technology in cooperation with Beaufort West Municipality SA, Chris Swartz Water Utilization Engineers and Pierre Marais Water and Wastewater Engineering (WWE). The risk analysis and the gathering of data were performed during a 9-week study visit financed by SIDA – Minor Field Study and Chalmers Vänner.
We would especially like to express our gratitude to Mr. Chris Swartz, who arranged our time in South Africa, made it a great experience and a lifelong memory, and also gave good support and directed our analysis. At Chalmers we would like to thank Assoc. Prof. Thomas Pettersson, who made the project possible from the beginning and helped us compile the thesis, and Andreas Lindhe, who supported our work, especially with the MCDA. Further, we would also like to thank Linda van Zyl and Magda Pretorius in Mossel Bay for good coffee and for sorting out grammatical issues.
We would also like to thank the following people in South Africa and Namibia for helping with the risk analysis: Christopher Wright, Pierre Marais, Jurgen Menge, Shawn Chaney, John Esterhuizen, Truddy Theron-Beukes and the staff at the reclamation and wastewater plant.
Göteborg June 2011
Olle Ivarsson and Andreas Olander
Notations
There are several different frameworks and national guidelines in the field of risk management, which has led to confusion regarding how some of the terms and definitions should be interpreted. This report uses the same terminology as the TECHNEAU project, presented in the report Generic Framework and Methods for Integrated Risk Management in Water Safety Plans (Rosén, L. et al., 2007) and based on IEC (1995). Definitions and common abbreviations are presented below.
Term Explanation
Backyard dwellers: People who, due to e.g. poverty, unemployment or a backlog of houses, live abnormally many in the same household.
Basic sanitation service: Basic sanitation facilities that are easily accessible for the household. The facilities should be operated in a sustainable way, and waste/wastewater should be removed in a safe way.
Basic water service: In the case of:
- communal water points, i.e. a tap shared between households: 25 l/day of drinking water per supplied person, with a flow of 10 l/min, within 200 m of the household; or
- formal connection, i.e. house or yard connection: 6000 litres of drinking water per month.
Further, these quantities need to be supplied 350 days per year, with no more than 48 h of consecutive interruption each time. Basic sanitation service may also be included in the definition.
Hazardous agent: A biological, chemical, physical or radiological agent that potentially may cause harm.
Hazardous event: An event, source or situation which can cause harm.
Informal settlement: Poorer housing area lacking access to basic water and electricity services, often constructed on government ground without authorization and consisting of simply constructed dwellings built of e.g. plywood, corrugated metal etc. Also referred to as shantytowns.
Risk: A combination of the probability of occurrence and the consequence of a specified hazardous event.
Water Board: A state-owned organization/entity that operates and handles dams, wastewater systems, water supply infrastructure etc. Its task is to work as a water utility and, in cooperation with WSAs, provide people with basic water service.
Water Service Provider: Non-governmental organizations, private companies or water boards that provide drinking water and/or sanitation services with permission from the WSA responsible for the area of jurisdiction.
Water Service Authority: A metropolitan municipality, district municipality or authorized
1 Introduction
Ongoing global warming is today a fact for most people, and the discussion has lately shifted towards consequences, responsibilities and how to reduce emissions of greenhouse gases. Consequences are already noticeable across the planet, with increasing floods in some regions and drought in others. Where water is already scarce, less precipitation in combination with increasing temperatures and growing urbanization causes major issues for any country (WHO, 2010). A lack of water to meet daily demands, i.e. water scarcity, is today a fact for one out of three people in the world (UN, 2010a).
South Africa suffers from water scarcity in several regions around the country, and almost all available freshwater resources are fully utilized and under stress. According to the Department of Water Affairs and Forestry (DWAF) et al. (1999), only 8.6% of the precipitation is available as surface water, mainly due to evaporation, which gives one of the lowest precipitation-to-surface-water conversion ratios in the world. Pollution of ground- and surface water is also identified as a major threat to South Africa's raw water sources, with mining industries carrying a large proportion of the responsibility. Like the general trend in the world, South Africans are leaving the countryside and moving towards the bigger cities in search of better economic conditions, resulting in more people in a smaller area, which further stresses the available raw water sources.
In Beaufort West, located in the Western Cape, a severe drought nearly emptied the town's raw water sources, resulting in an immediate lack of drinking water. In January 2011, the town was relying on trucks delivering additional drinking water to support its inhabitants. Frequent droughts, in combination with predicted population growth and large informal housing areas that need to be connected to the water supply system, will increase the pressure on the raw water sources even further in the future. According to WHO (2010), water scarcity is also directly connected to socio-economic impacts, which to some extent is reflected in Beaufort West's welfare statistics (BWM, 2010a).
The current situation in Beaufort West has led to the construction of a direct Wastewater Reclamation Plant (WRP) producing drinking water. The plant functions as an addition to the existing water production system and will increase the drinking water production and reduce the pressure on the existing raw water sources. Thereby the community will be better prepared for future droughts, and it will be possible to supply the growing future population with drinking water that fulfils quantity and quality standards. This is the first direct WRP producing drinking water in South Africa, and the second in the world after New Goreangab, Windhoek. See the thesis Microbiological Risk Assessment of New Goreangab Water Reclamation Plant in Windhoek, Namibia (Ander & Forss, 2011), conducted during the same period as this thesis, for more information about reclamation in Windhoek.
Due to the widespread water scarcity in South Africa, WRPs are being considered in several other South African towns, which explains the high interest in the project within the water sector¹ (DWAF et al., 1999). This type of drinking water plant puts higher demands on the treatment process, since the raw water contains more pathogens than conventional raw water sources. The high pathogen load and the often complex multi-barrier approaches mean that higher risks are connected to reclamation systems, which substantiates the need for a comprehensive risk assessment.
¹ Professional Engineer Chris Swartz, Water Utilization Engineers, 2011-04-20 (Personal communication)
1.1 Aim
The overall aim of this project is to perform a risk assessment case study that identifies and
quantifies risks, concerning drinking water quantity and quality, related to the new
reclamation system in Beaufort West. For the most severe identified risks improvements will
be suggested to reduce risks to an acceptable level. The most important objectives of the
project are to:
1. Identify hazards threatening water quantity and/or quality within defined system
boundaries.
2. Estimate risk levels connected to the identified hazards, by assessing the probability
and consequence of each hazard.
3. Define tolerability criteria.
4. Rank the identified risks and decide if they are tolerable or not.
5. Suggest and evaluate risk reduction measures for unacceptable risks.
Further, the aim is to provide an example of how a risk assessment for a reclamation system can be conducted according to the TECHNEAU risk management framework. The TECHNEAU Hazard Database (THDB) does not include wastewater as a raw water source, which is why this will be accounted for and further developed. The case study is also intended to serve as a foundation for the continuing development of the Water Reclamation Plant (WRP) and to be included in Beaufort West's next water safety plan (WSP).
1.2 Problem Definition
Reclamation systems tend to be complex, since they typically use several technically advanced barriers. Due to the lack of experience with reclamation systems in South Africa, and the high pathogen concentration in the raw water from the WWTP, higher risks are connected to reclamation systems than to conventional drinking water production. Therefore a comprehensive risk assessment is required.
Furthermore, this type of system is expected to become more common in South Africa as well as in other countries suffering from water scarcity. More knowledge in the field is therefore crucial for successful continued progress and development.
1.3 Method
The risk assessment will be performed according to the general framework of risk management developed by TECHNEAU (2007). Hazards will be identified by the use of the THDB in combination with a hazard spreadsheet developed by South Africa's Water Research Commission (WRC). The spreadsheet will be used during discussions with South African water experts, treatment plant operators, politicians, consultants etc. Risk matrices, with focus on water quality and quantity consequences, will be used to estimate the connected risk levels, and risk tolerability decisions will be evaluated according to the principle As Low As Reasonably Practicable (ALARP).
Risk reduction measures will be suggested for the most severe risks and ranked by the use of
Multi Criteria Decision Analysis (MCDA), developed within TECHNEAU at Chalmers
University of Technology (Lindhe et al., 2010).
Literature studies will be done to gather new information in the field and to investigate questions that arise.
A three-day study visit to Windhoek's reclamation system will be made to gather information and to discuss general problems connected to reclamation systems. A one-day study visit and seminar at a newly constructed desalination plant and an indirect wastewater reclamation system in Mossel Bay, South Africa, were also part of the project.
1.4 Delimitations
The case study is limited to assessing risks connected exclusively to the reclamation system, providing a general overview of risks that will constitute a basis for more comprehensive studies. The system boundaries, see Chapter 10.1.1, are defined from the water inlet of the Wastewater Treatment Plant (WWTP), through the new WRP, to the blending point with drinking water from the conventional system.
Due to the defined system boundary, interactions or dependencies with the conventional water treatment system may occur that are not illustrated or evaluated in this case study. In the future, the complete system should be considered in the WSP, including an updated version of this risk assessment.
In the risk assessment, the rapid sand filter and the UV/H₂O₂ step were not assessed due to time restraints.
2 The General Risk Management Process
The main purpose of the risk management process is to ensure that people, the environment and assets are not exposed to unacceptable risks, by balancing the cost of risk reduction against the cost of the consequences originating from the risk-generating activity (Grimwall et al., 2010). The interpretation of the term risk differs from person to person, and there exist several different definitions in the literature, depending on whether the focus of the risk is connected to human health, the environment or technical problems (Lindhe, 2010). One of the more widespread definitions of risk is that it is a combination of the probability and the consequence of an undesired event, i.e. a hazardous event. Kaplan and Garrick (1981) state that the term "risk" can be decomposed into three questions (also discussed by IEC, 1995; Grimwall et al., 2010):
1. What can happen? (i.e. what can go wrong?)
2. How likely is it?
3. What are the consequences?
Further, IEC (1995) states that the objective of the overall process of risk management is to control, prevent or reduce loss of life, illness, injury, damage to property and consequential loss, and environmental impact. Grimvall et al. (2010) and others emphasize that risk management also involves an appropriate balance between realizing opportunities for gain/profit and minimizing losses. Efficient risk management can thus create opportunities: by analyzing risks and reaching a deeper understanding of the situation, possibilities arise to mitigate or control the risk, which consequently facilitates new projects. The process of risk management according to IEC (1995) (Figure 2.1) is often referred to when risk management is described.
Figure 2.1 Risk management according to IEC (1995).
The last step of the risk management process, risk reduction/control, includes the implementation of possible risk reduction measures, which necessitates the involvement of decision makers, e.g. an agency or a political body. This step is however excluded from the case study performed in this report (see chapter 10), since the result of the report is intended to serve as an additional information base for Beaufort West's WSP and not to take any final decision about the implementation of risk reduction measures. If only the first two steps of the risk management process are performed, risk analysis and risk evaluation, the process is usually referred to as risk assessment.
In every project, stakeholders are involved in different ways and to different extents. The ideal stakeholders are the decision-makers, the cost-bearers/benefit receivers and the risk-takers (Grimwall et al., 2010). In a typical project, those exposed to risks are not necessarily those benefiting from the activities causing them, and the decision makers may not be directly affected by the negative consequences of the risk or the economic consequences of the decision. Consequently, it is important to involve participants from all sides, since their interest areas overlap (Figure 2.2). It is crucial to firmly establish which risk levels are acceptable or not, to have a transparent process, and to communicate which principles are applied among the stakeholders.
Figure 2.2 Conceptual model showing the overlapping interest areas of stakeholders involved in the risk management process (Modified from Grimwall, 1998).
2.1 Risk Analysis
The main purpose of the risk analysis is to gather information and knowledge about risk levels to support decision-making. Risk analysis, like risk management, is an iterative process and should be updated as new information becomes available or as the surroundings change. Risk analysis should be performed in a structured order, where the main steps are as follows (e.g. Grimwall et al., 2010; IEC, 1995):
1. Define the scope
2. Threat and hazard identification
3. Estimation of risk
The scope includes the goal and vision of the risk analysis. The system boundaries and the sub-systems that are considered are also included. How the system boundaries are defined has a big impact on the final risk, since interactions between components (chains of events) are common and not always easy to get an overview of. It is also important to communicate the scope with stakeholders from all areas (Figure 2.2).
The hazard identification can be based on experience, brainstorming or checklists, e.g. the TECHNEAU Hazard Database (THDB), but also on more systematic processes such as What-if analysis and Hazard and Operability analysis (HAZOP) (Rosén et al., 2007). Stakeholders have a vital role to play in the hazard identification, and it is important to have relevant people participating in the process. In general, threats and hazards can be classified in different ways, e.g. cause-, consequence- or resource-related (Grimwall et al., 2010).
Risk estimations can be performed quantitatively, semi-quantitatively or qualitatively. Quantitative methods generally describe risk in numbers and qualitative methods describe it in words. Quantitative methods generally require more data and are therefore not always a possible option. Semi-quantitative methods are based on qualitative data where probabilities and consequences are assigned numerical values to illustrate their importance/significance. One common risk estimation method, either quantitative or semi-quantitative, is risk ranking with the use of a risk matrix. The risk matrix method will be used in the case study in this report and is explained further in chapter 4.2.
When estimating risk levels connected to hazards, consequences and corresponding probabilities should be described. There are, however, uncertainties connected to the estimation of both parameters. Uncertainties connected to the estimation of the probability are generally more difficult to assess than those connected to the estimation of the consequence (Grimwall et al., 2010). There exist different techniques, with different levels of complexity, to handle uncertainties connected to the estimation of probabilities. Which technique is appropriate varies with the available data and the process being considered. A general categorization of the most common techniques used for the estimation of probability is presented in Figure 2.3. The case study in this project will use techniques from the lowest step.
Figure 2.3 Different techniques used for the estimation of probability, depending on the quality of available data (Grimwall et al., 2010).
2.2 Risk Evaluation
When evaluating the risk, the intention is to conclude whether a risk is acceptable or not, i.e. to make a risk tolerability decision. If the initial risk is considered too high, risk reduction measures need to be implemented to lower or control the risk. If a risk is decided to be acceptable, it is not always necessary to reduce it; it may be enough to control it. As stated by IEC (1995), the risk evaluation consists of two parts:
1. Risk tolerability decisions
2. Analysis of options
One method used in the risk tolerability decision part is risk ranking. By the use of a semi-quantitative risk matrix, all identified hazards are ranked by their risk level, and the ALARP principle can be used to conclude whether the risk levels are tolerable or not. For the risks decided not tolerable, risk reduction measures are proposed. By using a Multi-Criteria Decision Analysis (MCDA), the options are ranked and a plan is obtained that suggests which risk reduction measures are most efficient to implement, based on a set of given criteria. For further explanation, see chapter 4.
2.3 Risk Reduction/Control
The result of the risk assessment is presented in a report where estimated risk levels, and often also suggested risk reduction measures, are presented. In the risk reduction/control step, a decision should be made on how to proceed with the risk reduction or, if the risk is decided acceptable, how it should be controlled. This decision is often taken by a different party than the one conducting the risk assessment. It is therefore vital that the risk assessment process is transparent and understandable to the decision maker. The final result of the risk reduction/control should be presented in a report that more specifically includes:
- whether there are any risks that are decided unacceptable and need to be reduced;
- whether there are any risks that are decided acceptable, but need to be controlled;
- how and which risk reduction measures, connected to the unacceptable risks, should be implemented;
- how risks decided acceptable should be controlled and monitored;
- how the future development of the risks should be monitored.
3 TECHNEAU
TECHNEAU started as a project, funded by the European Commission, to challenge traditional drinking water treatment and to address future demands through the development of new techniques and monitoring systems for safe drinking water (TECHNEAU, 2011a). The project consisted of eight activity work areas (WA) (Figure 3.1).
Figure 3.1 Conceptual model of the TECHNEAU project, presenting all eight work areas (TECHNEAU, 2010a).
3.1 Risk Assessment within TECHNEAU
WA 4 focused on the development of a comprehensive decision support framework for risk assessment: a framework designed to facilitate cost-effective risk management for safe and sustainable drinking water production, from a source-to-tap perspective (TECHNEAU, 2010a). TECHNEAU developed risk assessment further, based on the accepted generic framework for risk management developed by IEC (1995) and the concept of Water Safety Plans (WSP) developed by WHO (2005). One important part was to put a higher focus on water quantity related risks in water safety plans. Before TECHNEAU started, risks were commonly analyzed from a quality perspective only, as suggested by WHO (2005).
Lindhe (2010) explained the relationship between quantity and quality failure connected to supply failure with a conceptual model (Figure 3.2). Hazards initiate a supply failure, which can be further categorized as quantitative supply failure or qualitative supply failure. Quantity failure can occur either through failure of components in the system or through events leading to unacceptable water quality causing a production stop. Quality failure is when unacceptable water is delivered and either is detected, but no action is taken or can be taken, or is not detected, in which case no action can be taken.
4 Methods
In this chapter, the different methods and techniques connected to risk assessment and risk management that are used in this project are explained. The techniques are further detailed and implemented in chapter 10.
4.1 Hazard Identification - Bottom-up and Top-down
According to Beuken et al. (2008) there are two main approaches to hazard identification: the bottom-up approach and the top-down approach. The simplest and most used approach is bottom-up, which uses experience and knowledge from personnel involved in the process operation to identify hazards. The hazard identification in the top-down approach categorizes hazards into subsystems to clarify where the hazards originate. Connected to the subsystems are hazard checklists that are used to identify hazards relevant to the assessed system. An advantage of this approach is that a more extensive hazard list is often created, compared to a bottom-up approach that often only identifies well-known hazards. However, a combination of both methods is suggested to identify as many hazards as possible.
Two examples of top-down approaches are the TECHNEAU Hazard Database (THDB) and a spreadsheet developed by the South African Water Research Commission (WRC). The THDB provides a database of technical, environmental and human hazards connected to water supply systems from a source-to-tap perspective. The water supply system is divided into 12 sub-systems (Figure 4.1). Hazards that may pose a threat in the future are also considered in the database, e.g. sabotage, terrorist attacks, emerging pathogens and climate change. The WRC spreadsheet hazard identification list is so far only a draft version and is not as extensive as the THDB. The spreadsheet developed by WRC also gives the possibility to estimate the probability and consequence of the hazards, which the THDB does not.
Figure 4.1 The water supply system divided into 12 sub-systems in THDB, SW = surface water, GW = ground water, IW = infiltration water (Beuken et al., 2008)
The case study in this project, chapter 10, will use a bottom-up approach to involve operators,
decision makers and different stakeholders, in combination with a top-down approach to cover as many hazards as possible. The spreadsheet developed by WRC formed the base for the hazard identification, since it also gives the possibility to estimate the risk level connected to the identified hazards. The spreadsheet was complemented with risks from the more extensive THDB, mainly from subsystems 6, 7, 8, 10, 11 and 12.
There was no subsystem connected to wastewater treatment in either the THDB or the WRC spreadsheet. The wastewater treatment is an essential part of the Beaufort West reclamation system, since it corresponds to the reclamation system's raw water source. Therefore this subsystem had to be developed separately and added to the spreadsheet. The WRC spreadsheet considered only quality related risks, compared to the THDB which also considers quantity related risks. The spreadsheet was updated with the possibility to estimate risks from both a quality and a quantity perspective.
4.2 Risk Ranking
The aim with risk ranking is to establish the relative severity between identified risks. Risk
levels are estimated by categorizing each hazard, by corresponding probability and
consequence, defined in either words or numbers. Definitions by WHO (2005) of probability
and consequence are commonly referred to when considering water quality related risks
(Table 4.1). As suggested by TECHNEAU (2007), not only quality related risks but also
quantity related risks should be analyzed in the risk assessment. For quantity related risk
definitions, see chapter 10.1.3. The estimated risks are presented in a risk matrix, with
probability and consequence as axis, where the more severe risks are located in the upper
right corner (Figure 4.2).
Figure 4.2 Risk matrix with probability and consequence scales expressed in both numbers and text,
i.e. semi-quantitative.
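As an illustration of the matrix logic only (not of any particular tool used in the project), the following Python sketch places hazards, given as hypothetical (probability, consequence) pairs, into a 5×5 grid of the kind shown in Figure 4.2:

def risk_matrix(hazards):
    """Return a 5x5 grid (probability x consequence) with hazard counts.
    grid[p-1][c-1] counts hazards with probability p and consequence c;
    severe risks accumulate towards the upper right corner."""
    grid = [[0] * 5 for _ in range(5)]
    for p, c in hazards:
        grid[p - 1][c - 1] += 1
    return grid

# Example: three hazards given as (probability, consequence) pairs
grid = risk_matrix([(2, 3), (5, 4), (1, 1)])
for row in reversed(grid):  # print the highest probability row first
    print(row)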
Risk ranking is a common method to assess risks because it is easy to perform and gives
relatively transparent results that are easy to communicate. Risk ranking does, however,
have several limitations. According to Lindhe (2010), hazards can have several different
possible outcomes, but this is not easily considered in a risk matrix, since only one
consequence with a connected probability is illustrated for each hazard. There is no formal
procedure to consider and illustrate chains of events in a structured order in risk matrices.
Chains of events and interactions do, however, have big impacts on several of the estimated
risks. For some risks to occur it is not enough that one process is malfunctioning; typically
a series, or chain, of events needs to take place before there is any real threat.
There is also no common procedure for uncertainty analysis in risk ranking.
Table 4.1 Definitions of probability and quality consequence/impact categories used in the case
study (WHO, 2005).

Probability
1 Rare – Once every 5 years
2 Unlikely – Once per year
3 Moderately likely – Once per month
4 Likely – Once per week
5 Almost certain – Once a day

Consequence
1 Insignificant – No detectable impact.
2 Minor – Minor aesthetic impact causing dissatisfaction but not likely to lead to use of alternative less safe sources.
3 Moderate – Major aesthetic impact possibly resulting in use of alternative but unsafe water sources.
4 Major – Morbidity expected from consuming water.
5 Catastrophic – Mortality expected from consuming water.
To present risk levels in a quantitative manner, a risk priority number, R, is commonly
calculated by assigning numbers to the consequence and probability scales. The risk priority
number can then be calculated as,

R = P^a · C^b [1]
where P is the probability and C is the consequence. It is also possible to assign different
weights to the probability (a) and the consequence (b) if they are considered to contribute
differently to the overall risk level. Consequently, by adding weights to the scales, people's
perception of risks may be taken into consideration. For example, an unlikely accident with
expected catastrophic consequences, e.g. an airplane crash, is often experienced as more severe
than a more frequent accident with less severe expected consequences, e.g. a car crash;
even if, from a strictly statistical view, this is not correct. Several factors influence risk
perception, which means that, within some categories, higher risks can be tolerated compared
to others, even if the risk itself is equally large. Relative differences in risk priority
numbers can be used to evaluate which risk reduction measures have the biggest effect. Further,
it is also possible, by using non-linear scales, to exaggerate the more severe risks, mainly to
benefit risk reduction of higher risks compared to lower ones.
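A minimal Python sketch of equation [1] follows; the weights are invented here only to demonstrate the effect of emphasizing the consequence scale:

def risk_priority(p, c, a=1.0, b=1.0):
    """Risk priority number R = P**a * C**b (equation [1]).
    Choosing b > a weights the consequence more heavily than the
    probability, and b > 1 also exaggerates the more severe risks
    (a non-linear consequence scale)."""
    return (p ** a) * (c ** b)

# Unweighted: a frequent minor risk and a rare catastrophic risk score alike
print(risk_priority(5, 2), risk_priority(2, 5))            # 10.0 10.0
# Consequence-weighted (b = 2): the catastrophic risk now dominates
print(risk_priority(5, 2, b=2), risk_priority(2, 5, b=2))  # 20.0 50.0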
In this case study the consequence scale is treated as more important than the probability.
The reason is that some consequences are normally never acceptable; reducing the consequence
should therefore be prioritized over reducing the probability.
4.3 Customer Minutes Lost (CML)
Customer minutes lost is used to express the expected time that the average consumer is
affected by a failure, often expressed in minutes per year. This can be connected to either
water quality or quantity problems. When considering quality, CML is expressed as the
expected time that the consumer is exposed to drinking water of inadequate quality. When
considering quantity, CML is expressed as the expected time the consumer is not supplied
with water (Lindhe, 2010). Consequently, CML can be used as a performance indicator of how
robust a system is, and as a quantitative measure to evaluate the relative severity of risks
against each other. The expected value of CML can be calculated as,

R(CML) = P_F · C_A [2]

where C_A is the proportion of consumers affected and P_F is the probability of failure, defined
as the probability of a quantity failure multiplied with the corresponding consequence.
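A small Python sketch of equation [2] is given below. Note that the scaling to minutes per year is an illustrative assumption of this sketch: the formula itself gives an expected proportion, and converting it to minutes per year depends on how P_F and C_A are defined in a given study.

def expected_cml(p_failure, c_affected, minutes_per_year=365 * 24 * 60):
    """Expected customer minutes lost per year (equation [2]):
    R(CML) = P_F * C_A, here scaled to minutes per year.
    p_failure:  probability that the system fails to supply water (P_F)
    c_affected: proportion of consumers affected by the failure (C_A)"""
    return p_failure * c_affected * minutes_per_year

# E.g. a 1% yearly failure probability affecting 40% of the consumers
print(round(expected_cml(0.01, 0.40)))  # ~2102 minutes per year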
4.4 As Low As Reasonably Practicable (ALARP)
A common way to conclude whether a risk is acceptable or not is to apply the principle
As Low As Reasonably Practicable (ALARP). A risk can be judged unacceptable, see the red field
in Figure 4.3, which means that all necessary measures must be taken to reduce or eliminate
it. Applied together with a risk matrix, the unacceptable risks are displayed in the red
field in the upper right corner. Risks can also be acceptable, meaning that no further action
needs to be taken; these are displayed in green in the lower left corner of the matrix. Risks
that fall between these areas are within the ALARP region. Such risks may be accepted if it
is economically and/or technically unreasonable to reduce them, i.e. risk levels should be
reduced to the lowest level reasonably practicable.
Figure 4.3 ALARP levels implemented in a risk matrix (Modified from Melchers, 2001).
The boundaries of the different ALARP levels are often decided through discussion with
experts, decision makers and other stakeholders. ALARP levels need to be decided, or at least
discussed, independently for each new risk assessment project, since risks acceptable in one
context may be unacceptable in another.
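The classification logic can be sketched in Python as follows. The numeric thresholds are purely hypothetical, since, as noted above, the ALARP boundaries are decided per project:

def alarp_region(p, c, acceptable_max=4, unacceptable_min=15):
    """Classify a risk (probability p, consequence c, both 1-5) into an
    ALARP region based on its risk priority number R = p * c.
    The thresholds are illustrative; in practice the boundaries are set
    for each project together with experts and stakeholders."""
    r = p * c
    if r >= unacceptable_min:
        return "unacceptable"  # red field: must be reduced or eliminated
    if r <= acceptable_max:
        return "acceptable"    # green field: no further action needed
    return "ALARP"             # reduce if reasonably practicable

print(alarp_region(5, 4))  # unacceptable
print(alarp_region(1, 3))  # acceptable
print(alarp_region(3, 3))  # ALARP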
4.5 Risk Reduction
Risks identified as unacceptable have to be reduced. Developing and applying risk-reduction
measures aims to bring the risk down to an acceptable level, and different measures may
achieve this in different ways. Commonly the measures should be cost effective, meaning that
a measure reduces the risk to an acceptable level for the least amount of money. Other
criteria that measures may be desired to fulfill are acceptance among the consumers, a
persuasive effect, or fulfilling environmental criteria.
Common ways to identify risk-reduction measures are, for example, expert judgment and
structured or unstructured brainstorming. Another option is the checklist of risk reduction
measures for common problems in water treatment systems developed by TECHNEAU (2010b).
4.6 Multi Criteria Decision Analysis (MCDA)
MCDA is a structured and transparent method used to evaluate how well different
alternatives, e.g. risk reduction measures, meet different criteria. If the problem is to
decide which car to buy, the criteria can be e.g. engine power, number of passengers, price,
size etc. These criteria are then used to evaluate which car best suits the predetermined
demands. It is also possible to assign weights to the criteria if they are judged to
have different impact on the final decision.
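Continuing the car example, a minimal weighted-sum sketch in Python could look as follows. All scores and weights are invented for illustration, and real MCDA methods may use other aggregation rules than a weighted sum:

# Hypothetical alternatives scored 1-5 against each criterion
alternatives = {
    "Car A": {"engine power": 4, "passengers": 5, "price": 2, "size": 4},
    "Car B": {"engine power": 3, "passengers": 3, "price": 5, "size": 3},
}
# Criteria weights reflecting their impact on the final decision
weights = {"engine power": 0.2, "passengers": 0.3, "price": 0.4, "size": 0.1}

def weighted_score(scores, weights):
    """Simple weighted-sum MCDA: a higher total means a better alternative."""
    return sum(weights[c] * s for c, s in scores.items())

ranking = sorted(alternatives,
                 key=lambda a: weighted_score(alternatives[a], weights),
                 reverse=True)
print(ranking)  # ['Car B', 'Car A'] with the weights above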
There are several MCDA methods available for evaluating risk reduction measures, but
they all have the same aim: to facilitate the decision making process when several alternatives
to reduce the risks are available. In the literature there exist other similar terms, like
multi-criteria analysis (MCA) and multi-attribute decision analysis (MADA); these are,
however, methods used for the same purpose as MCDA (Lindhe, 2010). In this report the term
MCDA is used to describe a method that evaluates and prioritizes different risk reduction
alternatives according to how well they perform against a set of criteria.
From previous studies on MCDA methods related to drinking water supply (Hajkowicz and
Collins, 2007) it was concluded that MCDA models lacked proper handling of risk and
uncertainty. Lindhe et al. (2010) addressed this and developed a new MCDA method that
considers uncertainties in a formalized manner. The MCDA model uses risk ranking (risk
matrix) as a basis, with risk priority numbers, to calculate the risk reduction of a measure.
Uncertainties in the estimation of risk reduction are considered with either discrete or beta
distributions. The discrete method assigns uncertainties to the input data, i.e. to the
initially estimated probability and consequence, so that the uncertainty of the resulting
risk reduction can also be calculated, while the beta method assigns uncertainties directly
to the estimated risk reduction.
The case study in this report has used the MCDA method developed within TECHNEAU at
Chalmers University of Technology (Lindhe et al., 2010) to rank the suggested risk-reducing
measures. Beta distributions are used to model uncertainties. The MCDA evaluates risk
reduction measures based on their cost of implementation and their risk reduction potential.
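To illustrate the general idea of the beta method, not the exact procedure of Lindhe et al. (2010), the following Python sketch models an uncertain risk reduction with a beta distribution, using a PERT-style parameterization of min / most likely / max expert estimates and Monte Carlo simulation. All numbers are hypothetical:

import numpy as np
rng = np.random.default_rng(seed=1)

def beta_risk_reduction(r_before, low, mode, high, n=100_000):
    """Monte Carlo sketch: the risk reduction, as a fraction of the
    initial risk priority number r_before, is uncertain and modelled
    with a beta distribution fitted to (low, mode, high) estimates
    using PERT-style shape parameters."""
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta_ = 1 + 4 * (high - mode) / (high - low)
    frac = low + (high - low) * rng.beta(alpha, beta_, size=n)
    reduction = r_before * frac
    return reduction.mean(), np.percentile(reduction, [5, 95])

mean, (p5, p95) = beta_risk_reduction(r_before=20, low=0.2, mode=0.5, high=0.8)
print(f"mean reduction {mean:.1f}, 90% interval [{p5:.1f}, {p95:.1f}]")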
5 The Necessity of Water
Water is one of the main components of societal growth and development. Historically,
fresh water and early water management have been among the most important reasons that
civilizations were able to prosper – but lack of water and overexploitation of freshwater
resources are also believed to have been the main reason for some of the major civilization
downfalls. The relation between accessible water and development is just as valid today (UN,
2010a).
5.1 Water Scarcity
Water scarcity evolves when the demand is higher than the supply. According to FAO (2007),
water scarcity is the point at which the aggregate impact of all users affects the supply,
or quality, of water under prevailing institutional arrangements to the extent that the
demand by all sectors, including the environment, cannot be fully satisfied. Water scarcity
does not only evolve where fresh water is limited, but also as a consequence of poor water
management. Shortage of water causes not only quantity problems, but often also a
degradation of the quality.
Water is essential for basic welfare and is necessary to sustain and maintain healthy
ecosystems. Furthermore, it is a crucial ingredient for all socio-economic development. Good
sanitation and provision of water work as an engine for economic growth. A lack of water to
meet the daily demands, i.e. water scarcity, is today a fact for one out of three people around
the world (UN, 2010a), and one fifth of the world's population faces physical water scarcity
(FAO, 2007).
For the majority of countries with water scarcity, agriculture is the predominant consumer of
water. Historically, irrigated agriculture has played a major role for developing economies in
rural areas. At the same time these poorer communities have often suffered from
inadequate water supply, resulting in health issues. Due to their inadequate health status they
have not been able to develop further, but have instead been stuck in poverty and disease. In
many semi-arid regions, the rural poor see access to water for food production, livestock and
domestic purposes as more critical than access to primary health care and education. According to FAO
(2007) it is crucial that areas that suffer from water scarcity protect and focus on efficient use
of all water resources, as well as enhancing the water productivity of all sectors to sustain
their basic needs.
Groundwater has played a major role in arid regions for irrigation and domestic demands.
Because of a lack of adequate planning, legal frameworks and governance, a new debate has
arisen regarding the sustainability of extensive groundwater mining. Since the
extraction of groundwater has grown, about half of the wetlands have disappeared during the
20th century, which has led to losses of ecosystem services, biodiversity and productivity of
ecosystems (FAO, 2007).
According to Millennium Development Goal (MDG) 7, the proportion of the population
without sustainable access to safe drinking water and basic sanitation is to be halved by 2015
(UN, 2010a). According to the latest report there is progress in the supply of drinking water,
but there are also rising threats in terms of urbanization, population growth and increasing
demand from households and industries. UN (2010a) further stresses that maintaining a safe
water supply remains a challenge due to expanded activities within agriculture and
manufacturing. This expansion has led to more pollutants being in circulation, and more
aquifers being polluted. FAO (2007) points out that water quality degradation can be a major
cause of water scarcity. To cope with these challenges, tools need to be developed and
applied.
applied.
5.2 Water Condition in South Africa
As indicated by FAO (2007), South Africa experiences acute water stress in several regions and
freshwater is indicated as its most limiting resource. Almost all available freshwater
resources are fully utilized and under stress. Furthermore, most of the rivers have been dammed
and 50% of South Africa's former wetlands have been converted for other purposes. There are
several reasons behind this. South Africa is a semi-arid country, which means that the
potential evaporation is larger than the precipitation. Only about 8.6% of the
precipitation is available as surface water, which is one of the world's lowest conversion
ratios. This situation, as in many other arid countries, is expected to get worse with an
increasing population and an increasing water demand (Department of Environmental
Affairs, SA, 2009).
Pollution of surface water and groundwater, as well as eutrophication, is indicated as a major
concern. Furthermore, South Africa may suffer severe consequences due to climate change,
especially the Western Cape. Regardless of the exact temperature increase due to the
greenhouse effect, the Western Cape can expect shorter periods of rain and increasing
evaporation (Department of Environmental Affairs, SA, 2009).
Water is indicated as a crucial element in battling poverty and will become a major restriction
on future socio-economic development. South Africa is aware of the situation and there are
several ongoing projects to increase water quantities. In 2006 the MDG goal of halving the
proportion of the population without sustainable access to safe drinking water was fulfilled.
However, the goal of providing basic sanitation is progressing more slowly (UN, 2010b).
A rapid and uncontrolled growth of informal urban settlements puts high stress on South
Africa's water supply system. It is not only problematic for the authorities to supply the
housing with infrastructure for drinking water and to handle sewage; it also constitutes an
increasing risk to raw water sources, since the housing is often located near surface waters.
Numbers presented by UNESCO (2006) mention approximately 5 million people living in informal
settlements in South Africa, a figure that has certainly increased since. The future trends
expected to influence the drinking water supply in the southern part of Africa were presented
during a workshop in Namibia in 2006 (Swartz, C.D. & Offringa, G., 2006). During the workshop
a fast and increasing population growth from today's 48 million was predicted, which will
probably lead to an increase in the number of informal housings and increased problems related
to drinking water. A growing middle class also increases both the demand for water and the
requirements on its quality (Swartz, C.D. & Offringa, G., 2006).
In Beaufort West the demand for low-cost housing has grown constantly over the last few
years; 1500 new houses were built between 2004 and 2009, but 3000 people are still listed for
houses. Moving people from informal settlements into new houses in general means fewer people
per tap, which affects the quantity of water in terms of a higher demand (BWM, 2010a).
Poverty in the country is widespread. Over 34% of the population live on less than $2 per
day, and 70% of them live in rural areas where the main raw water source is groundwater. Due
to geological conditions, the groundwater sources represent less than 10% of the available
water in the country, and over 70% of the rural housings depend on groundwater as their raw
water source.
With very modest amounts of precipitation and recharge of groundwater aquifers, it is a
risky strategy to have so many people relying on groundwater as their main raw water
source. In the future, major investments in infrastructure projects will be needed to
comply with quality and quantity standards (UNESCO, 2006).
5.3 Management and Sustainability
Water is a renewable resource, and low quality water, such as wastewater, should, whenever
possible, be considered as an alternative source for less restrictive use. The United Nations
Economic and Social Council provided a management policy in 1958: “No higher quality
water, unless there is a surplus of it, should be used for a purpose that can tolerate a lower
grade” (UN, 1958).
In the report by UNEP (1997), Water Pollution Control – A Guide to the Use of Water Quality
Management Principles, it is pointed out that the single most adequate approach to solving
the global problem of water shortage is to apply appropriate techniques for developing
alternative sources of water, together with improvements in the efficiency of water use and
adequate control to reduce water consumption. Appropriate techniques can also be used
to reduce impacts and to relieve the pressure on already stressed natural water sources.
Membrane treatment and reverse osmosis are used at large scale in the world today. Millions of
people already rely on desalination for their daily demand of water, and the trend is
that desalination systems will become increasingly common throughout the world (Tampa
Bay Water, 2010; Water-technology, 2011). In 2004 it was estimated that seawater
desalination capacity would increase by 101% by 2015; the latest reports indicate that this
prognosis will be vastly exceeded (WWF, 2007). Desalination plants and Water Reclamation
Plants (WRPs) share many difficulties and treatment processes. They may provide solutions to
water scarcity in similar situations, and in South Africa they are frequently presented as
competing techniques. Treating wastewater into drinking water with a WRP costs about half as
much as desalination2, mainly due to the lower pressure required in the reverse osmosis
process.
Membrane techniques are energy intensive and connected to serious greenhouse gas
emissions, but able to treat almost all types of water. They may divert focus from more
sustainable options and might be seen as an ultimate solution to water scarcity. WWF's
(1997) view is that these techniques should only be used when there is a genuine need to
increase water supply and when they are the best and least damaging method of augmenting it.
Assessing the impacts and managing the water demand of large-scale engineering solutions is
needed at an early stage to avoid irreversible damage to nature. The process preceding the
decision on which solution to use should be transparent and exhaustive, in which all
alternatives are properly considered and fairly judged on their environmental, economic
and social impacts. Better solutions in terms of costs and environment would be water
conservation, water use efficiency improvements and water recycling. Water recycling in this
context means using low quality water for suited purposes, like irrigation, flushing toilets
etc. (WWF, 1997).
Extensive treatment techniques are also expensive to construct, where the membranes often
correspond to a significant part of the costs3. Due to the high costs these techniques are often
2 Cobus Oliver, Veolia Water South Africa, Engineering Manager, 2011-05-23 (PP presentation)
3 Contractor, Professional Engineer Pierre Marais (WWE), 2011-04-15 (Personal communication)
found in areas that are already developed. When these techniques are used in less developed
countries, it may be problematic to give poorer people access to the treated water
if the construction and operating costs are to be covered by tariffs. All membrane treatment
techniques need to handle brine and backwash water. Backwash water often contains
chemicals that may be harmful to the environment if released untreated, and the rejected
water, or brine, contains a high pathogen load and/or salt content due to changes in
concentrations.
South Africa’s economy is structured around large and energy intensive mining and primarily
minerals beneficiation industries. Only ten other countries have higher commercial primary
energy intensities, and South Africa is the 13th highest emitter of greenhouse gases (UNFCC,
2011). The primary energy source in South Africa is coal, followed by oil. The renewable
energy sources, in this case only hydropower amounts to 0.1% of the total energy production,
(Figure 5.1) (IEA, 2008). This means that the energy to supply treatment facilities of water
would consist almost exclusively of energy produced from fossil fuels.
Figure 5.1 Share of total energy supply in 2008 (IEA, 2008). The chart shows coal/peat
dominating at 71.1% and hydro at only 0.1%, with the remainder shared by oil, gas, nuclear
and combustible renewables & waste.