1. Introduction
Several studies have shown the efficacy of smart education. It is known to be convenient, efficient, and sometimes even cost-effective (Anurag et al., 2014). One great benefit of smart education is that it can be used flexibly in many different domains of education regardless of students' attributes such as gender, age, or learning background. Even though many studies have covered issues relating to the efficacy of smart education and its learning framework, there are still growing demands from both industry and academia to correlate students' learning processes with smart education in their domains (Tore and Jon, 2018). In the maritime sector, unfortunately, the implementation of smart learning, especially online learning, is quite limited because of the sector's special circumstances (Boris, 2004). Despite the great advances made in E-navigation and its implementation in related infrastructures, vessels ranging from gigantic ocean-going ships to small coastal ships still have little access to the internet and are thus unable to connect to large data portals once they sail from port or far from the coast. Therefore, the use of online learning platforms in the maritime sector is limited compared to other sectors. This research aims to develop a platform that is readily accessible to users in real time in order to measure the impact of smart learning on enhancing the efficacy of Maritime English (ME) education.
The paper is divided into three parts. First, it reviews previous studies of international regulations and guidelines related to ME and the current status of ME education in the Republic of Korea. Building on this, the second part explains how the ME platform was designed to meet the needs of students. The third part analyzes the results of the tests that measured the efficacy of using online platforms in ME education and discusses possible further research.
2. Design of a platform reflecting the needs of current Maritime English education
2.1 Maritime English education standards set by international regulations and guidelines
The International Maritime Organization (IMO) publishes Maritime English Model Course 3.17, which sets criteria on how (and what) to teach students of Maritime English. It is considered a syllabus, and each economy has structured its teaching curricula based on this model course. Besides ME Model Course 3.17, one publication widely known for its importance to ME education is the Standard Marine Communication Phrases (SMCP). If the IMO model course is a syllabus for course designers and teachers, the SMCP is more like study material: a practical publication that sets out a wide range of communication skill sets used onboard vessels.
These two publications are crucial not only from an academic perspective (designing courses and constructing teaching materials for ME classes) but also for cadets and seafarers ready to take up duty as new officers on international voyages. This is because their contents are directly related to the international standards that apply to every aspect of the maritime sector, from the issuance of licenses to the practical use of English onboard.
Hence, these publications are taken into consideration when structuring courses in the Republic of Korea. Korea Maritime and Ocean University and Mokpo Maritime University (Division of Navigation Science) offer 180 and 208 hours of English-related classes, respectively (Korea Institute of Maritime and Fisheries Technology, 2018). Besides these universities, two maritime high schools (544 and 459 ME class hours) and one vocational institution (96 ME class hours) provide ME education and training. However, IMO Model Course 3.17 recommends 553 educational hours for deck officers, which means that students of maritime vocational institutions, high schools, and universities in Korea may need more hours of ME classes in general.
2.2 Lack of SMCP-oriented education in the maritime sector
In addition, there is a need for education in the SMCP and maritime technical terminology. In 2010, a study was conducted at the University of Antwerp to determine how well English was being used in the field and what improvements were needed for better communication onboard (Lieve et al., 2010). The study surveyed 127 maritime staff, including seagoing navigation officers and engineers. Almost 93 % of the respondents considered technical ME vocabulary important, and 81 % considered the SMCP important. These figures are higher than those for other aspects of the language, such as pronunciation (74 %) and grammar (68 %), as shown in Fig. 1.
Moreover, the same study indicates that 90 % and 100 % of the respondents, respectively, considered the SMCP and technical ME vocabulary important for recruitment and promotion. But as indicated in Fig. 2, when asked whether they had received any education in the SMCP or technical ME vocabulary in the previous five years, 54 % and 60 % of the respondents said that they rarely or never had, which highlights the need for learning and assessment materials designed for English for Specific Purposes.
The need for education in the SMCP and in technical terms used aboard ships was also revealed by another study conducted in 2018, namely Enhancing Onboard English Communication in the Republic of Korea (Korea Institute of Maritime and Fisheries Technology, 2018). From this, we can conclude that education in both the SMCP and technical terms is needed and necessary in the maritime industry.
2.3 Design of the platform
The use of the SMCP, along with maritime technical words, is crucial for onboard tasks and duties, as mentioned before. Unfortunately, the previous research findings also suggest that these two essential elements of education for seafarers (the SMCP and maritime technical words) are lacking. With this in mind, the maritime education and testing platform developed in this research was designed to cover the use of the SMCP and maritime technical terms.
The SMCP is divided into general provisions, a glossary, Part A, and Part B. Part A is an indispensable part of any ME curriculum, whereas Part B is not a mandatory section but one that mariners are implicitly expected to know (IMO, 2015). The glossary explains vocabulary frequently used aboard ships. Given the importance of each segment, Part A and the glossary, considered the essential segments, were covered when designing the ME platform.
The platform consists of three parts: translation, SMCP glossary, and general vocabulary. Fig. 3 shows where each section of the test questions is extracted from.
Table 1 shows the evaluation criteria (or elements) of each section of the test, with a note on how many questions were formed for each section.
To measure the educational effectiveness of the platform more clearly, every phrase and word from the SMCP was used verbatim; no switching, adding, or paraphrasing was applied when forming the test questions. Experts with more than three years of ME teaching experience participated in this process.
Also, to ensure fairness in the learning and evaluation processes, all the questions from the SMCP were randomly selected from a pool of questions stored in the data bank. Thus, students get a different set of questions every time they log in to the platform.
In addition to the random selection of questions, to maintain a certain level of difficulty in the objective form of testing, the questions are distributed into three categories according to level of difficulty (hard, medium, and easy) at 20 %, 60 %, and 20 %, respectively, as presented in Table 2. In this process, the Korean seafarers' license test system was taken as a reference, and a seafarers' testing examiner was consulted on the proper weighting of the difficulty levels.
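As an illustration of this selection scheme (the paper does not publish its implementation, so the function, names, and bank contents below are hypothetical), a test set can be drawn as follows:

```python
import random

# Hypothetical question bank keyed by difficulty; contents are placeholders.
question_bank = {
    "hard":   [f"hard question {i}" for i in range(40)],
    "medium": [f"medium question {i}" for i in range(120)],
    "easy":   [f"easy question {i}" for i in range(40)],
}

def draw_test_set(bank, total=20, split=(("hard", 0.2), ("medium", 0.6), ("easy", 0.2))):
    """Randomly draw a test set with a 20/60/20 hard/medium/easy distribution."""
    selected = []
    for level, weight in split:
        selected.extend(random.sample(bank[level], round(total * weight)))
    random.shuffle(selected)  # avoid presenting questions grouped by difficulty
    return selected

test_set = draw_test_set(question_bank)  # a fresh random set on each login
```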
In every part of the test, to ensure accurate scoring of the answers, an elaborate scoring system is used instead of judging each answer as simply right or wrong. For example, if the correct answer is “stand by all stations for leaving the port” and the student’s answer is “all stations for leaving the port,” the student will get 86 points, not 0.
Also, hints are given in the second and third parts of the test, enabling students to know how many characters the answer should contain.
Once students finish a given test and click the final submission button, their submitted answers are displayed. The answers are scored not as simply right or wrong but on a scale of 0 to 100, and partial points are given if words are misspelled.
Finally, the students’ scores are recorded in the server’s database, and Structured Query Language (SQL) is used to manage these data. Administrators can check not only the submitted scores but also students’ activity, such as their general logs. Fig. 4 outlines the process of taking one set of the designed ME test.
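To make the storage step concrete, here is a minimal sketch using SQLite; the paper specifies only that SQL manages scores and logs, so the engine, schema, and all names below are assumptions:

```python
import sqlite3

conn = sqlite3.connect("me_platform.db")

# Hypothetical schema: one table for sectional scores, one for activity logs.
conn.executescript("""
CREATE TABLE IF NOT EXISTS scores (
    student_id TEXT NOT NULL,
    section    TEXT NOT NULL,   -- translation / glossary / vocabulary
    score      REAL NOT NULL,   -- 0 to 100 per question, summed per section
    taken_at   TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS activity_log (
    student_id TEXT NOT NULL,
    event      TEXT NOT NULL,   -- e.g. login, test_start, submission
    logged_at  TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

def record_score(student_id: str, section: str, score: float) -> None:
    """Store one sectional score so administrators can review it later."""
    conn.execute(
        "INSERT INTO scores (student_id, section, score) VALUES (?, ?, ?)",
        (student_id, section, score),
    )
    conn.commit()

record_score("s001", "translation", 86.0)
```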
2.4 Scoring process
For the scoring process, the similar_text function is used. It receives two strings as parameters and calculates the similarity of the two strings character by character (Khirulnizam, 2007). The required similarity measure is equal to the sum of the numbers of characters in common determined for each of the substrings:
$$\operatorname{sim}(s_1, s_2) = \lvert \operatorname{lcs}(s_1, s_2) \rvert + \operatorname{sim}(s_1^{L}, s_2^{L}) + \operatorname{sim}(s_1^{R}, s_2^{R}) \tag{1}$$

where lcs(s_1, s_2) is the longest common substring of s_1 and s_2, and the superscripts L and R denote the substrings to its left and right. In other words, the algorithm finds the longest string in common and applies the process recursively to the substrings on the left and right of the longest common string (Ian, 1993).
Using this function, the similarity is calculated and presented as a percentage. For example, if the answer is 'story' and the student types 'store', there are 8 common characters out of a total of 10 characters, so the similarity is 80 %. In this way, a student can earn a partial score for a partly correct answer, and the reviewing process is facilitated because the student knows where to make up ground in the next round of tests.
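The paper references PHP's similar_text; the following is a minimal Python re-implementation of the same longest-common-substring recursion, written for illustration rather than as the platform's actual code:

```python
def common_chars(a: str, b: str) -> int:
    """Characters in common, counted as in Eq. (1): find the longest common
    substring, then recurse on the pieces to its left and right."""
    best_len = best_i = best_j = 0
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            if k > best_len:
                best_len, best_i, best_j = k, i, j
    if best_len == 0:
        return 0
    return (best_len
            + common_chars(a[:best_i], b[:best_j])
            + common_chars(a[best_i + best_len:], b[best_j + best_len:]))

def similarity_percent(answer: str, response: str) -> float:
    """Convert the match count to a percentage of the combined length."""
    total = len(answer) + len(response)
    return 100.0 if total == 0 else 2 * common_chars(answer, response) * 100 / total

print(similarity_percent("story", "store"))  # 80.0, matching the example above
```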
3. Analysis of the results
3.1 Methodology
The experiment was conducted on students who had joined onboard training during their vocational courses. In October 2019, a total of 30 students (26 male, 4 female) were pursuing the 3rd class navigation officer vocational course at the Korea Institute of Maritime and Fisheries Technology. They all held a bachelor's degree or equivalent as the minimum qualification for the course.
First, the students were asked whether they wanted to participate in a program using the online platform for the duration of the experiment. At this stage, 15 students volunteered to use the online platform. They were told to use the platform as much as they wanted but to take the ME test at least once a week. They were labeled the participating group. The remaining students did not use the online platform and were labeled the non-participating group.
Although the experiment started with 30 students, the number dropped to 27 a month later due to early employment or other personal reasons. Thus, this experiment analyzed the data from 27 students: 14 from the participating group and 13 from the non-participating group.
To accurately gauge the students’ Maritime English level, the same test was given to both groups at the beginning of the experiment (initial test) and just before its end (final test). The entire experiment spanned six weeks, from early October to mid-November 2019. As a result, a total of 148 tests were gathered, corresponding to around 3,000 questions (20 per test) and 444 sectional test results (three sections per test).
3.2 Score comparison between the groups
To identify any significant differences between the groups of variables, hypotheses were set up for both independent and dependent t-tests. The tests were conducted at the 0.05 significance level; thus, if the p-value fell below 0.05, the null hypothesis was rejected and the groups were determined to differ significantly. The t-value was also calculated to express the difference in units of standard error.
The ME tests consist of three parts: translation questions, SMCP glossary, and vocabulary extracted from SMCP Part A. The translation part has seven questions, the SMCP glossary part six, and the SMCP Part A vocabulary part seven. Each question is worth 100 points; thus, the full mark for one set of the ME test is 2,000 points.
Table 3 presents the comparison of the initial test scores of the two groups. The participating group scored 75 points higher than the non-participating group in the initial test, but this gap widened to 250 points in the final test.
To analyze the data statistically, Levene’s test was used to assess the homogeneity of variance between the groups, and the result showed that the two groups could be assumed to have equal variances (F = .221, p = .642). The t-test showed no significant difference in mean scores between the two groups at the beginning of the experiment. However, as indicated in Table 4, after six weeks there was a significant score difference between the groups.
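For readers who wish to reproduce this procedure, the same two checks can be run with scipy; this is an illustrative sketch, not the authors' script, and the score lists are placeholders rather than the experimental data:

```python
from scipy import stats

# Placeholder score lists for the two groups (the real data are in Tables 3-4).
participating = [1500, 1320, 1610, 1450, 1580, 1390, 1700,
                 1260, 1540, 1480, 1350, 1620, 1410, 1560]
non_participating = [1440, 1300, 1520, 1380, 1470, 1250, 1600,
                     1330, 1490, 1360, 1420, 1550, 1280]

# Levene's test for homogeneity of variance between the groups.
f_stat, f_p = stats.levene(participating, non_participating)

# Independent t-test; assume equal variances only if Levene's test is not significant.
t_stat, t_p = stats.ttest_ind(participating, non_participating, equal_var=(f_p >= 0.05))

print(f"Levene: F = {f_stat:.3f}, p = {f_p:.3f}")
print(f"t-test: t = {t_stat:.3f}, p = {t_p:.3f} (significant at 0.05: {t_p < 0.05})")
```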
3.3 Score comparison in each section of the test
Table 5 shows the score comparison for each section of the test and the results of the independent t-tests conducted to identify any significant differences between the two groups.
There were improvements in the test scores in each part of the test, except the non-participating group's translation part. The translation part in general showed little improvement compared to the other parts of the test. This might be because, while the vocabulary questions require students to type only a few words, the translation questions require them to structure whole sentences, which takes much greater learning effort.
Moreover, the independent t-test results show that at the initial stage, no significant difference could be seen in any of the three parts of the test (p > .05), but after six weeks, significant differences arose in the translation section and the SMCP Part A vocabulary section (p < .05). However, for the glossary section, since the scores of the two groups increased in tandem, no significance was found in the t-test results (p > .05).
3.4 Analysis of the efficacy of using the ME platform
Fig. 5 depicts the average score distribution of the participating group over a 36-day period, covering more than 92 sets of practice tests with around 2,000 questions from 14 students. The trend line shows that the scores increase over time.
The average practice test score during this period was 1,567.3 points, which, in general, is higher than the average initial or final test scores depicted in Fig. 6. This might be because the initial and final tests were conducted under the supervisors’ discretion with a strict time limit of 15 minutes, whereas the practice tests were taken freely by students for learning and self-evaluation purposes.
In addition, to analyze the efficacy of using the ME platform, a survey was conducted to find out how much time students spent on the ME platform and on learning general English. The participating group spent, on average, 3.25 hours (195 minutes) per week on general English learning and 0.34 hours (20.5 minutes) per week on the ME platform. The non-participating group, on the other hand, spent 4.3 hours (258 minutes) per week on general English learning.
Fig. 6 depicts how the students’ average scores changed between the initial and final tests. The participating group increased its score by 347.37 points in total, an average change of 9.6 points per day, whereas the non-participating group increased by 172.49 points in total, a change of 4.79 points per day.
Since the participating group spent a combined 0.5 hours (30.7 minutes) per day on English learning (general English plus the ME platform) and gained 9.6 points per day, one minute of English learning corresponded to a 0.313-point increase for the participating group. Using the same method, one minute of English study corresponded to a 0.130-point increase for the non-participating group.
To understand how much impact ME platform usage had on the students, their learning efficacy was further analyzed based on the non-participating group's score changes. A minute of learning led to a 0.130-point increase for non-participants, which means that of the participating group's 347.37-point improvement, 130.37 points can be attributed to studying general English and 216.99 points to using the ME platform. In other words, one minute of using the ME platform for learning and testing purposes led to an increase of 2.06 points in the test score.
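This attribution can be checked directly from the figures above (a worked restatement of the paper's numbers over the 36-day period, not new data):

$$
\begin{aligned}
\text{general English minutes} &= \tfrac{195}{7} \times 36 \approx 1002.9, \qquad \text{platform minutes} = \tfrac{20.5}{7} \times 36 \approx 105.4,\\
\text{general English gain} &= 1002.9 \times 0.130 \approx 130.37 \text{ points},\\
\text{platform gain} &= 347.37 - 130.37 \approx 217.0 \text{ points (216.99 in the paper's rounding)},\\
\text{platform rate} &= \tfrac{217.0}{105.4} \approx 2.06 \text{ points per minute.}
\end{aligned}
$$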
Additionally, consultancy was sought from ME experts with more than three years of ME teaching experience and from a total of four first officers tasked with onboard duties, as shown in Table 6 and Table 7. The initial and final tests were given to them, and they were asked what, in their view, would be the reasonable minimum scores required to qualify as a junior officer and as a senior officer onboard. The purpose was to obtain expert judgment on the ME test score differences that may exist between junior and senior officers owing to their expertise and past onboard experience.
The consultancy results indicated that, on average, 1,133.5 points was considered a proper minimum score for new junior officers and 1,725 points a proper minimum score for senior officers onboard.
The initial test score of the lowest-scoring 20 % of students was 641 on average, which, according to the consultancy, leaves a gap of 492.5 points to the surveyed minimum score for a junior officer. Based on the previous findings, the study calculated that 47 minutes of ME platform usage per week would be required to achieve a 492.5-point improvement over a five-week course. This means that if students take 80 questions (four sets of tests) a week for five weeks, they would probably answer four or five more questions correctly than they did initially out of a total of 20 questions.
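The weekly figure follows from the 2.06 points-per-minute rate derived above (the paper rounds the result down to 47):

$$
\frac{492.5 \text{ points}}{2.06 \text{ points/minute}} \approx 239 \text{ minutes}
\quad\Rightarrow\quad
\frac{239 \text{ minutes}}{5 \text{ weeks}} \approx 47.8 \text{ minutes per week.}
$$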
The average initial test score of all students was 1,154, which is 571 points below the 1,725-point minimum considered appropriate, according to the survey, for qualification as a senior officer. Applying the same method as in the previous paragraph, students would need 53 minutes of ME platform use per week for five weeks to achieve 1,725 points.
4. Conclusion
Various attempts were made to establish an online education and testing platform to gauge the students’ learning efficacy. In establishing it, the following methods were adopted for preparing the ME test questions.
The first is random selection from the question pool; the second is the rational division, classification, and distribution of questions according to difficulty level; the third is the provision of various types of hints; the last is a detailed scoring system for answers written as long sentences. Through this, a basic platform that can utilize smart devices for measuring the efficacy of smart learning in the ME domain was laid out.
With these designs and features of the ME platform, a six-week experiment was conducted. Using the platform was found to have a significant effect on the students’ test scores, with higher efficacy in the areas of vocabulary and glossary knowledge than in translation or sentence-based questions.
The score enhancements were further analyzed to quantify the effect of using the ME platform on the students’ ME test scores. Consultancy was sought for this process, and based on the previous findings, the amount of time students would be expected to need to achieve certain score levels was estimated.
This research aimed to analyze students’ learning efficacy by introducing a learning method and a testing platform that can be accessed in real time. Considering the recent growing efforts to incorporate online platforms in different areas of expertise, it is believed that this study could serve as a reference in laying the groundwork for online ME education and testing systems in the future.
However, due to the limited number of candidates available on the training vessel and the short experiment period, there were difficulties in analyzing the data statistically. Moreover, tools that can record or survey students' usage of an online platform are required to gauge learning efficacy more accurately and meaningfully. Thus, further research should be conducted to address these shortcomings and accurately assess the efficacy of using the ME platform.
Since wireless networks are becoming increasingly crucial in the maritime industry and their coverage is widening, it is imperative that similar studies be conducted. Furthermore, we hope that this platform will be developed further in future research to accommodate the growing need to support students’ learning processes.