ISSN : 1229-3431(Print)
ISSN : 2287-3341(Online)
Journal of the Korean Society of Marine Environment and Safety Vol.29 No.7 pp.930-938
DOI : https://doi.org/10.7837/kosomes.2023.29.7.930

Study on Improving Maritime English Proficiency Through the Use of a Maritime English Platform

Jin Ki Seor*, Young-soo Park**, Dongsu Shin***, Dae Won Kim**
*Full-Time Lecturer, Korea Institute of Maritime and Fisheries Technology, 367 Haeyang-ro, Yeongdo-gu, Korea
**Professor, Division of Navigation Convergence Studies, Korea Maritime and Ocean University, 727 Taejong-ro, Yeongdo-gu, Korea
***Professor, Korea Institute of Maritime and Fisheries Technology, 367 Haeyang-ro, Yeongdo-gu, Korea

* First Author : jinkiseor@gmail.com


Corresponding Author : youngsoo@kmou.ac.kr, 051-410-5085
Received: November 27, 2023; Revised: December 27, 2023; Accepted: December 29, 2023

Abstract


Maritime English is a specialized language system designed for ship operations, maritime safety, and internal and external communication onboard. According to the International Maritime Organization’s (IMO) International Convention on Standards of Training, Certification and Watchkeeping for Seafarers (STCW), navigational officers engaged in international voyages must have a thorough understanding of Maritime English, including the use of Standard Marine Communication Phrases (SMCP). This study measured students’ proficiency in Maritime English using a learning and testing platform that includes voice recognition, translation, and word-entry tasks, and evaluated the resulting improvement in Maritime English exam scores. Furthermore, the study investigated the level of platform use needed for cadets to qualify as junior navigators. The experiment began by examining the correlation between students’ general English skills and their proficiency in SMCP through an initial test, followed by the evaluation of score improvements and changes in exam duration during the mid-term and final exams. The initial test revealed a significant difference in Maritime English test scores among groups based on individual factors, such as TOEIC scores and self-assessment of English ability, and both the mid-term and final tests confirmed substantial score improvements for the group using the platform. This study confirmed the efficacy of a learning platform that could be applied extensively in maritime education and potentially expanded beyond the scope of Maritime English education in the future.






    1. Introduction

    With new technological advancements, the concept of education is rapidly evolving (Gracia et al., 2022). Digitalization in the education domain refers to the process of utilizing digital technologies to transform and enhance various aspects of learning activities (UNESCO, 2022). Internationally recognized organizations such as the International Maritime Organization (IMO) have emphasized the significance of digitalization in their respective domains. For example, the IMO, in collaboration with the World Maritime University (WMU), developed e-learning courses aimed at strengthening the implementation of IMO rules and regulations. Similarly, the International Labour Organization has launched numerous labor-centric educational and capacity-building programs, offering a mobile-friendly, simulated online learning environment.

    In line with these trends, there is a growing effort to integrate new technologies into the educational domain within the maritime sector (Reza et al., 2017). Whether through e-learning, augmented reality, or gadget-incorporated instruction, education is gradually shifting toward enhanced learning methodologies (Chaudhary et al., 2014). These modern approaches are acknowledged for their potential efficiency in comparison to conventional methods (Seor and Park, 2020).

    However, the adoption of online learning is less common in the maritime education sector than in other industries (Galic et al., 2020). Seafarers have limited access to the internet compared to onshore workers (Aimilia et al., 2014), a limitation that extends across a wide range of vessels, from ocean-going ships to smaller coastal vessels. Consequently, seafarers struggle to maintain internet connectivity when sailing away from ports, and the utilization of online learning platforms in the maritime domain has been limited compared to other sectors. Nonetheless, responsive online learning has the potential to be more effective than the traditional classroom-based approach (Hurlbut, 2018), and the shift towards digital education could lead to enhanced learning outcomes and a more engaging educational experience for maritime students.

    Thus, in this paper, a rule-based, auto-scoring online platform was developed and utilized to assess students’ level of Maritime English competency. Students’ learning efficiency was then further analyzed by incorporating their personal backgrounds and their usage of the platform. The primary objective is to measure the effectiveness of the platform in assessing students’ proficiency and skill improvement in Maritime English, specifically in the domains of speaking, writing, and reading. This, in turn, seeks to identify the potential benefits of integrating digital learning strategies into maritime education, particularly for enhancing language skills and overall educational outcomes.

    2. Designing a Platform Reflecting the Need for Maritime English Education

    2.1 International standard on Maritime English

    The STCW is an international maritime convention established by the IMO. Its purpose is to set minimum training, certification, and watchkeeping standards for seafarers worldwide and to ensure that seafarers are adequately trained, qualified, and competent to perform their duties on board ships.

    To better support the implementation of the STCW Convention, the IMO has developed the IMO Model Courses, which are designed to provide standardized training programs for the roles and responsibilities on ships. The IMO Model Courses cover a wide range of topics, including navigation, first aid, cargo handling, and ship security. They also cover Maritime English, providing a guide for marine trainers teaching prospective maritime officers. Because of the significance of these documents, maritime institutions utilize the IMO Model Course as a standard instructional tool for teaching Maritime English.

    As stated in the IMO Model Course, Maritime English by nature has many special characteristics that distinguish it from general English. For example, it has a unique grammatical structure, follows formalized communication protocols, and uses technical terms from the maritime context to ensure clarity and precision in communication. One of its important features is the use of standardized phrases, which are set out in the SMCP: ‘to avoid confusion and error, there is a need to standardize the language used’. The SMCP emphasizes that the increasing number of internationally trading vessels, with the diverse linguistic backgrounds of their crews, raises the chance of misunderstandings that can endanger the vessel, the people on board, and the marine environment. Thus, it is necessary to deploy precise, simple, and unambiguous communication methods. Simply put, the SMCP exists to avoid misunderstandings; it is therefore important for marine officers to learn and practice it.

    However, recent research has highlighted a lack of Maritime English education for seafarers, particularly in adherence to the IMO Model Course and SMCP guidelines (Seor, 2020). In response to this finding, this research intends to develop a platform that incorporates Maritime English materials, including SMCP, and subsequently evaluate its effectiveness.

    2.2 Designing a platform to assess students’ proficiency in using SMCP

    The SMCP is structured into three distinct segments: the General Provision, Part A, and Part B. The General Provision outlines the principles of using Maritime English in general. It explains the rules for utilizing the SMCP, for instance, how messages should be conveyed or structured to ensure clear comprehension by listeners. This section also encompasses a glossary of terms used onboard. Part A, as acknowledged by the International Maritime Organization (IMO, 2002), is a mandatory learning component for mariners due to its critical significance. In contrast, Part B is not designated as a mandatory section; however, mariners are still expected to acquire a working knowledge of it, thereby building a comprehensive understanding of and proficiency in maritime communications. Given the importance of each segment, Part A and the glossary, which are considered mandatory, were covered when designing the Maritime English platform.

    In terms of language input, the platform comprehensively addresses three areas of communication skills: speaking, reading, and writing, evaluated through four distinct types of assessments. Part one is dedicated to assessing speaking proficiency, part two addresses the learner’s writing ability, and parts three and four challenge learners with sentence completion and fill-in-the-blank exercises, which also serve as reading exercises. The detailed guidelines for each question type are explained below.

    • Part One: Speaking Assessment; In this segment, students are prompted to articulate phrases using standardized maritime communication sentences. Upon clicking the textbox, the system begins recording their responses for transcription (Fig. 1). To conclude the recording, students can either click outside the textbox or select the ‘next’ button. To enhance the user experience, the screen displays both the login time and the remaining time available for the test.

    • Part Two: Translation Exercise; In this section, students are presented with Korean SMCP phrases and are tasked with translating them into English. Similar to the design of the speaking assessment, the top of the screen displays the remaining time, allowing students to gauge their progress (Fig. 2).

    • Part Three: Glossary Exercise; In part three, students are asked to fill in terms from the glossary section of the SMCP (Fig. 3). To assist in the process, a 'hint' feature is available. Upon activating it, students are presented with the initial letters of the missing term, accompanied by indicators showing the number of characters in the word.

    • Part Four: Fill-in-the-Blank Challenge; For this activity, sentences are extracted from SMCP Part A, with some of the keywords being omitted. Students are required to complete these sentences by filling in the blanks. Hints are also provided to facilitate this process by unveiling the first letter of the missing word, offering students a clue for the answer (Fig. 4).

    To summarize, the ME test consists of four parts: speaking, translation questions, SMCP glossary terms, and vocabulary extracted from SMCP Part A. The translation part has seven questions, the SMCP glossary part six, and the SMCP Part A vocabulary part six. The speaking portion includes only one question, primarily because of the testing environment, in which around 60 individuals are expected to participate in a single classroom setting; having only one speaking question helps minimize classroom disruptions and interference, ensuring a smoother testing process. Each question accounts for 100 points; thus, the full mark for one set of the ME test is 2,000. A sketch of this composition follows.
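    For reference, the composition of a test set can be expressed as a simple structure; the following PHP sketch is illustrative, and the array layout is an assumption rather than the platform's actual code.

```php
<?php
// Composition of one ME test set as described above
// (array layout is illustrative, not the platform's actual schema).
$testSet = [
    'speaking'     => ['questions' => 1, 'points_each' => 100],
    'translation'  => ['questions' => 7, 'points_each' => 100],
    'glossary'     => ['questions' => 6, 'points_each' => 100],
    'part_a_vocab' => ['questions' => 6, 'points_each' => 100],
];

// 20 questions x 100 points each = 2,000 full marks
$fullMark = array_sum(array_map(
    fn(array $p): int => $p['questions'] * $p['points_each'],
    $testSet
));
echo $fullMark; // 2000
```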

    To ascertain the educational efficacy of the platform with greater precision, every phrase and word from the SMCP was utilized without alteration or modification. Experts with more than five years' background in Maritime English education were engaged to extract these questions.

    Furthermore, to ensure fairness in both the learning and evaluation processes, all questions derived from the SMCP were randomly selected from a comprehensive pool of questions stored in the database. Thus, students receive a different set of questions every time they log in to the platform.

    In addition to the random selection of questions, each test set aimed to maintain a certain level of difficulty by categorizing questions as hard, medium, or easy, weighted at 20%, 60%, and 20%, respectively, as presented in Table 1. In this process, the Korean seafarers' license examination system was taken as a reference, and a seafarers' examination examiner was consulted on the proper weighting of the difficulty levels. A sketch of such a stratified draw is given below.
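    A minimal sketch of how such a stratified random draw could be issued against the question pool is shown below; the database, table, and column names are assumptions for illustration, not the platform's actual schema.

```php
<?php
// Hypothetical sketch: draw the questions for one part of a test set
// with the 20/60/20 difficulty mix. Schema names are assumed.
$pdo = new PDO('mysql:host=localhost;dbname=me_platform', 'user', 'password');

function drawQuestions(PDO $pdo, string $part, string $level, int $n): array
{
    // ORDER BY RAND() is acceptable here because each pool holds
    // at most a few hundred questions.
    $stmt = $pdo->prepare(
        'SELECT id, question FROM questions
         WHERE part = ? AND difficulty = ?
         ORDER BY RAND()
         LIMIT ' . (int) $n
    );
    $stmt->execute([$part, $level]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

// e.g. the seven translation questions, approximating the 20/60/20 mix:
$translationSet = array_merge(
    drawQuestions($pdo, 'translation', 'hard', 1),
    drawQuestions($pdo, 'translation', 'medium', 5),
    drawQuestions($pdo, 'translation', 'easy', 1)
);
```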

    Upon completion of the test, once students click the final submission button, their scores are promptly recorded in the server's database, which is managed using Structured Query Language (SQL). Administrators can not only review the submitted scores but also monitor students' activity on the platform, including the time spent, the number of tests undertaken, test scores, and other activity records. Fig. 5 shows the steps involved in taking a set of the designed Maritime English (ME) tests.
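    The recording and monitoring steps might look like the following sketch; the table and column names are again assumed for illustration.

```php
<?php
// Hypothetical sketch of recording a submitted test set and of the kind
// of activity summary an administrator could query. Schema names are assumed.
$pdo = new PDO('mysql:host=localhost;dbname=me_platform', 'user', 'password');

$studentId      = 42;   // illustrative values
$totalScore     = 1460;
$elapsedSeconds = 655;

$stmt = $pdo->prepare(
    'INSERT INTO test_results (student_id, score, time_spent_sec, submitted_at)
     VALUES (?, ?, ?, NOW())'
);
$stmt->execute([$studentId, $totalScore, $elapsedSeconds]);

// Per-student activity summary: tests taken, average score, total time.
$report = $pdo->query(
    'SELECT student_id,
            COUNT(*)            AS tests_taken,
            AVG(score)          AS avg_score,
            SUM(time_spent_sec) AS total_time_sec
     FROM test_results
     GROUP BY student_id'
)->fetchAll(PDO::FETCH_ASSOC);
```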

    2.3 Scoring process

    Once the questions are structured, the next focus is how to accurately assess and score the students' input. For the evaluation of responses, PHP's similar_text function was employed to compute character-based similarity, with the resulting similarity rate between strings expressed as a percentage (Rahman et al., 2007). For example, if the correct answer is "world" and "woold" is provided as the response, a match is found for 4 out of the 5 characters, reflecting a similarity rate of 80%. As a result, the student's score is computed as 80 points.
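    This behavior can be reproduced directly with PHP's built-in similar_text function, which returns the number of matching characters and optionally reports the similarity as a percentage:

```php
<?php
// similar_text() recursively finds the longest common substrings and
// reports the match as a character count; the optional third argument
// receives the similarity as a percentage.
$correctAnswer = 'world';
$studentAnswer = 'woold';

$matched = similar_text($correctAnswer, $studentAnswer, $percent);

echo $matched . "\n";       // 4 (of 5 characters)
printf("%.0f\n", $percent); // 80 -> the student receives 80 points
```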

    As the number of characters increases and words combine to form sentences, comparing similarity becomes more complex. For longer sentences, a recursive method is employed to calculate the similarity between strings: after identifying the segment of the sentence with the highest match, the algorithm proceeds to assess the similarity of the remaining parts of the sentences. As shown in equation (1), this approach helps in evaluating the overall degree of match between the sentences.

    S_s = 2 × WordCount(P1 ∩ P2) / (WordCount(P1) + WordCount(P2))    (1)

    • Ss = Similarity of Sentences

    • P1 = Right Answer

    • P2 = Students’ Answer

    One of the advantages of using sentence similarity to score answers is that it allows grading based on the overall content of the sentence. If there is a spelling mistake or a missing word in the answer, the whole answer is not marked wrong; only the parts that differ lose points. This allows partial credit to be given for English composition questions, which makes the grading fairer and more understandable.
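    A minimal word-level sketch of equation (1) is given below; the function is illustrative and does not reproduce the platform's exact recursive matcher.

```php
<?php
// Word-level sketch of equation (1):
//   Ss = 2 * WordCount(P1 ∩ P2) / (WordCount(P1) + WordCount(P2))
// Illustrative only; the platform's recursive implementation may differ.
function sentenceSimilarity(string $p1, string $p2): float
{
    $words1 = preg_split('/\s+/', strtolower(trim($p1)));
    $words2 = preg_split('/\s+/', strtolower(trim($p2)));

    // Count matching words as a multiset intersection.
    $remaining = array_count_values($words2);
    $matches = 0;
    foreach ($words1 as $word) {
        if (!empty($remaining[$word])) {
            $matches++;
            $remaining[$word]--;
        }
    }

    return 2 * $matches / (count($words1) + count($words2));
}

// A missing word costs only the unmatched portion, not the whole answer
// (the example phrase is illustrative, not quoted from the SMCP):
$right   = 'I require a pilot at the pilot station';
$student = 'I require pilot at pilot station';
printf("%.2f\n", sentenceSimilarity($right, $student)); // 2*6/(8+6) = 0.86
```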

    3. Testing Method and the Results Analysis

    3.1 Testing method

    A total of 58 second-grade students from the Navigation Department of Maritime High School participated in the study during the first semester of 2023. These students seek to enhance their practical maritime skills by applying the theoretical knowledge acquired during their first year in school. To obtain a more precise understanding of the students' English proficiency, a 'personal background' survey was first conducted, covering previous TOEIC test scores, self-assessment of English skills, prior onboard experience, and overseas residency experience. An initial test was administered on February 27 to all participants to identify the correlation between test results and personal backgrounds. Subsequently, between early March and early April, students were instructed to undertake the test twice during their classes. On April 11, a mid-term test was administered, with all students participating under uniform conditions. After the mid-term test, students were divided into two groups: one group freely utilized the platform, while the other participated in the exams but did not use the platform at other times. After eight weeks, a final exam was administered to measure the students' Maritime English proficiency once again. However, due to personal circumstances, there were two absentees, so the analysis covered a total of 56 individuals. Fig. 6 presents a schematic of the exam schedule spanning a total of 13 weeks from the first to the third exam.

    The first test focused on identifying the correlation between students' previous experiences or personal backgrounds and their test scores. The second test was conducted to measure the overall improvement in student grades over the six weeks of onboard training, and the third test measured the efficacy of platform usage between the groups. The analysis covered a span of sixteen weeks, encompassing 425 test sets, approximately 1,700 individual tests, and 8,000 questions, and was conducted using Microsoft Excel and IBM SPSS Statistics 23.0 with the aim of determining the significance of each student's weekly academic performance and the variations in their scores.

    3.2 Score measurements and collection

    The platform was built with programming languages such as PHP, HTML, CSS, and JavaScript. For data management, the system employed MySQL, and Apache software was used for platform deployment. These enabled the collection of students' test scores and activity records during platform usage. Table 2 shows the collected score results from the initial test to the final test.

    3.3 Score comparison between the initial test and mid-term test

    To identify any notable disparities between the groups of variables, hypotheses were established for both independent and paired (dependent) t-tests. The study was assessed at the 0.05 significance level; consequently, if the p-value was below 0.05, the null hypothesis was rejected, indicating a significant difference between the groups. In this analysis, the t-value was also computed to express the difference in units of standard error.
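    For reference, the paired t-statistic underlying this comparison is the mean score difference expressed in units of its standard error; a standard formulation is:

```latex
% Paired-sample t-statistic: mean difference over its standard error
t = \frac{\bar{d}}{s_d/\sqrt{n}},
\qquad
\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i,
\qquad
s_d = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(d_i - \bar{d}\right)^2}
```

    Here d_i is an individual student's score difference between the two tests and n is the number of paired observations.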

    Table 3 presents the comparison of the scores of the initial and mid-term tests. The average initial test score was 863.35, which increased to 1,113.6 in the mid-term test. To determine whether this difference was significant, a paired-sample t-test was conducted between the initial and mid-term tests.

    The results indicate a significant difference between the initial and mid-term tests (p < .05). Thus, the total of six weeks of education and training onboard contributed to the enhancement of students' Maritime English performance. In addition, students' test-taking time was analyzed: the initial test had an average completion time of 751.6 seconds, while the mid-term test was completed in an average of 659.7 seconds; this difference was not statistically significant.

    3.4 Score comparison between the mid-term test and final test

    Following the mid-term test, 11 percent of the students (a total of 6 individuals) engaged with the ME testing platform. Before the final test, they submitted results for more than 17 test sets and dedicated approximately 132 minutes to platform usage over the seven-week period from April to June. The independent t-test results between the two groups on the mid-term and final tests are shown in Table 4. The results show that in the mid-term test, there was no significant difference in scores or test-taking time between the groups.

    However, as shown in Table 5, the final test results indicate a significant score difference between the groups. As in the mid-term results, however, there was no significant difference in test-taking time between the groups.

    This paper also analyzed score improvement by question type. As shown in Table 6, enhancement in scores was observed across all tests. The score improvement was 46.9 for the speech test, 24.0 for the sentence test, 12.6 for the glossary part, and 31.8 for the blank fill-in part. The hierarchy of score improvement was as follows: speech, blank fill-in, sentences, and glossary, as shown in Fig. 7. The speech pool comprised a total of 69 questions, sentences 74 questions, glossary 98 questions, and blank fill-in 53 questions. Given the number of questions in each test set, the probability of a given question appearing in a test was 1.4% for speech, 9.5% for sentences, 6.1% for glossary, and 11.3% for blank fill-in. The score enhancement was observed to align with the frequency of question occurrence in the tests, with the exception of the speech test.
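    These occurrence rates follow directly from the random draw: with k questions selected from a pool of N, each question appears with probability k/N:

```latex
P_{\text{speech}} = \frac{1}{69} \approx 1.4\%,\quad
P_{\text{sentence}} = \frac{7}{74} \approx 9.5\%,\quad
P_{\text{glossary}} = \frac{6}{98} \approx 6.1\%,\quad
P_{\text{blank}} = \frac{6}{53} \approx 11.3\%
```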

    3.5 Students' previous backgrounds and SMCP test scores

    Fig. 8 indicates the average Maritime English exam scores by TOEIC score group. A general trend revealed a positive correlation wherein higher TOEIC scores corresponded with higher average Maritime English test scores. This observation was further scrutinized using an ANOVA test. Initially, Levene's test affirmed the assumption of homogeneity of variances across groups for the initial exam (p > 0.05).

    For the initial exam, the ANOVA test yielded a p-value of 0.03, signifying a significant difference in Maritime English test scores among the TOEIC score groups. A Scheffé post-hoc test subsequently revealed that the group with TOEIC scores of 600 and above outperformed the groups with TOEIC scores ranging from 200 to 400 (p = 0.008 and 0.027, respectively). For the mid-term examination, the ANOVA test yielded a p-value of 0.002, and the Scheffé post-hoc test confirmed the earlier findings, indicating that the group with TOEIC scores of 600 and above significantly outperformed the groups with TOEIC scores ranging from 200 to 400 (p = 0.018 and 0.030, respectively). However, for the final examination, the ANOVA did not reveal any significant disparity across TOEIC score groups.

    Furthermore, a survey was administered regarding students' backgrounds. The questions included experience residing abroad, past onboard experience, past general English test scores, and the perceived necessity of English study. Among these factors, notable differences were identified between groups with different TOEIC Speaking scores, self-assessed English proficiency, and time devoted to English self-study. These divergences were statistically significant, confirming the potential influence of these factors on individuals' Maritime English test performance.

    3.6 Finding the necessary study time to qualify as a junior officer onboard

    The differences in students' Maritime English test scores were examined, taking into account the amount of time spent studying general English and the time dedicated to the Maritime English learning platform. The average weekly general English study time was 3.2 hours (192.5 minutes) for the participating group and 3.5 hours (211 minutes) for the non-participating group. Assuming that studying English directly influences the improvement of Maritime English exam scores, the enhancement in the participating group's scores was analyzed by differentiating between time spent on general English study and utilization of the Maritime English online learning platform. The score improvement for the non-participating group was 206.1, indicating an increase of 0.1529 points for every minute spent on general English study. On this basis, for the participating group, 193.6 points of enhancement were attributed to general English study and 255.4 points to platform utilization. Considering that the average weekly platform utilization time for the participating group was 18.9 minutes, every minute spent studying on the learning platform led to a score improvement of 1.93 points (out of a total of 2,000 points).
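    The per-minute figure can be reproduced from the platform usage reported in Section 3.4 (18.9 minutes per week over the seven weeks, roughly 132 minutes in total):

```latex
\frac{255.4\ \text{points}}{18.9\ \text{min/week} \times 7\ \text{weeks}}
= \frac{255.4}{132.3} \approx 1.93\ \text{points/min}
```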

    To investigate the minimum Maritime English test scores suitable for junior officers, a survey was conducted among seven experts who hold maritime licenses above second grade and have over seven years of working experience in the maritime industry. They were provided with information about the tests and asked to identify the minimum scores required to qualify as junior and senior officers onboard. The minimum test scores suggested by the experts are shown in Table 7.

    To determine the amount of time on the Maritime English testing platform needed to achieve scores suitable for junior officers, a further analysis focused on the lowest-performing 20 percent of students. The average exam score of the bottom 20 percent was 623 points, which is 524.5 points lower than the expert-advised appropriate score of 1,147.5. Based on the preceding findings, it was estimated that raising scores by 524.5 points would require about 57.17 hours (3,430.3 minutes) of general English study or about 4.68 hours (281.08 minutes) of learning platform utilization. Therefore, on a 13-week basis, a minimum of 4.39 hours (263.8 minutes) of general English study per week, or at least 22 minutes of Maritime English learning platform utilization per week, is necessary to meet the Maritime English exam scores suitable for junior officers. In other words, using the ME platform was more efficient than studying general English alone. However, reaching such conclusions requires further assumptions, and future research should address the following issues. First, this analysis assumed that students' improvement is exactly proportional to their study time; however, score improvement might not increase in direct proportion to study time. Second, the range of questions on the Maritime English test platform is quite limited, making it difficult to fully reflect Maritime English proficiency. Further study is therefore necessary to address these issues.
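    Spreading those totals across the 13-week term gives the weekly figures quoted above:

```latex
\frac{3430.3\ \text{min}}{13\ \text{weeks}} \approx 263.8\ \text{min/week} \approx 4.39\ \text{h/week},
\qquad
\frac{281.08\ \text{min}}{13\ \text{weeks}} \approx 21.6 \approx 22\ \text{min/week}
```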

    4. Conclusion

    In this study, an online learning and testing platform was established to comprehensively assess students' proficiency in Maritime English. A thirteen-week experiment was carried out, and it was found that platform usage was particularly effective in helping students with speaking skills compared with glossary, translation, or sentence-based questions. A further analysis examined the correlation between students' backgrounds and their Maritime English test scores; the results show that groups with certain personal backgrounds (e.g., high TOEIC scores) scored significantly differently from other groups. Finally, an analysis was conducted to estimate how much study time students would need to reach given scores. The study found that using the Maritime English platform is more time-efficient for improving ME test scores than focusing solely on general English learning.

    Given the maritime industry's growing reliance on wireless networks, it is crucial to engage in more extensive research, encompassing a greater volume of data, to accurately ascertain the most effective ways to leverage online educational and learning platforms.

    Figure

    Fig. 1. Example of speaking question.
    Fig. 2. Example of translation question.
    Fig. 3. Example of glossary question.
    Fig. 4. Example of fill-in-the-blank question.
    Fig. 5. Steps involved in taking a set of test.
    Fig. 6. Test schedule overview.
    Fig. 7. Overview of the score improvement.
    Fig. 8. ME test score depending on TOEIC score.

    Table

    Table 1. Different difficulties of questions in one test set
    Table 2. Test score results from initial to final test
    Table 3. Test score comparison (initial - mid-term test)
    Table 4. Test score comparison (mid-term) (P: Participating Group, N: Non-participating Group)
    Table 5. Test score comparison (final test)
    Table 6. Score improvement depending on the question type
    Table 7. Suggested minimum test score from the experts

    References

    1. Aimilia, P., S. Damian, and I. Theotokas (2014), Communication, Internet Access and Retention of Seafarers in the Shipping Industry, International Association of Maritime Economists Conference, July 2014, pp. 9-10.
    2. Chaudhary, A., G. Agrawal, and M. Jharia (2014), A Review on Applications of Smart Class and E-Learning, International Journal of Scientific Engineering and Research, Vol. 2, pp. 79-80.
    3. Galic, S., Z. Lusic, and T. Stanivuk (2020), E-Learning in Maritime Affairs, Journal of Naval Architecture and Marine Engineering, Vol. 17, No. 1, pp. 38-50.
    4. Gracia, V. D., A. M. Navarro, J. L. R. Sanchez, and R. G. Losada (2022), Digitalization and Digital Transformation in Higher Education: A Bibliometric Analysis, Frontiers in Psychology, Vol. 13, p. 9.
    5. Hurlbut, A. R. (2018), Online vs. Traditional Learning in Teacher Education: A Comparison of Student Progress, American Journal of Distance Education, Vol. 32, pp. 248-266.
    6. IMO (2002), Resolution A.918(22), Standard Marine Communication Phrases, 25 January 2002, pp. 11-12.
    7. Rahman, K. A., S. Ahmad, and M. J. Nordin (2007), The Design of an Automated C Programming Assessment Using Pseudo-code Comparison Technique, National Conference on Software Engineering and Computer Systems, p. 6.
    8. Reza, Z., B. Zakirul, M. Rodriguez, and K. Heikki (2017), Maritime Energy Management System (MariEMS) Online Delivery Platform, Conference Report of International Association Maritime Universities, December 2017, pp. 255-264.
    9. Seor, J. K. and Y. S. Park (2020), A Study on the Educational Efficacy of a Maritime English Learning and Testing Platform, The Korean Society of Marine Environment & Safety, Vol. 26, No. 4, pp. 374-381.
    10. Seor, J. K. (2020), A Basic Study on the Development of a Smart Learning Platform for Enhancing the Efficacy of Maritime English Education, Master's Thesis, Korea Maritime and Ocean University.
    11. UNESCO (2022), Guidelines for ICT in Education Policies and Masterplans, p. 92.