https://www.ijobas.pelnus.ac.id/index.php/ijobas/issue/feedInternational Journal of Basic and Applied Science2026-03-31T00:00:00+00:00Dr. Desi Vinsensia.M.Siijobas@pelnus.ac.idOpen Journal Systems<p style="text-align: justify;"><strong>International Journal of Basic and Applied Science</strong> is a journal of <strong>Basic and Applied Science</strong> published online by the Institute of Computer Science (IOCS). <strong>International Journal of Basic and Applied Science</strong> is published <strong>4 times a year (March, June, September, and December)</strong>. Each issue consists of a minimum of 5 articles; the scope of this journal is Basic and Applied Science.</p> <h3 style="text-align: justify;">Online Submissions</h3> <p style="text-align: justify;">Already have a Username/Password for <strong>International Journal of Basic and Applied Science</strong>?<br /><a class="action" href="https://ijobas.pelnus.ac.id/index.php/ijobas/login">GO TO LOGIN</a></p> <p style="text-align: justify;">Need a Username/Password?<br /><a class="action" href="https://ijobas.pelnus.ac.id/index.php/ijobas/user/register">GO TO REGISTRATION</a></p> <table class="data" style="height: 277px; width: 100%;" border="0" width="100%"> <tbody> <tr style="height: 18px;" valign="top"> <td style="width: 110.312px; height: 18px;">Journal title</td> <td style="width: 452.087px; height: 18px;"><strong> <em>: International Journal of Basic and Applied Science</em><br /></strong></td> </tr> <tr style="height: 36px;" valign="top"> <td style="width: 110.312px; height: 36px;">Title abbreviation</td> <td style="width: 452.087px; height: 36px;"><strong> <em>: IJOBAS</em><br /></strong></td> </tr> <tr style="height: 18px;" valign="top"> <td style="width: 110.312px; height: 18px;">Subjects</td> <td style="width: 452.087px; height: 18px;"><em>: Basic and Applied Science</em></td> </tr> <tr style="height: 18px;" valign="top"> <td style="width: 110.312px; height: 18px;">Language</td> <td style="width: 452.087px; height:
18px;">: English</td> </tr> <tr style="height: 18px;" valign="top"> <td style="width: 110.312px; height: 18px;">ISSN</td> <td style="width: 452.087px; height: 18px;">: ISSN <a href="https://issn.brin.go.id/terbit/detail/1340777007" target="_blank" rel="noopener">2301-8038</a> (Print) | ISSN <a href="https://issn.brin.go.id/terbit/detail/20210417011308756" target="_blank" rel="noopener">2776-3013</a> (Online)</td> </tr> <tr style="height: 18px;" valign="top"> <td style="width: 110.312px; height: 18px;">Frequency</td> <td style="width: 452.087px; height: 18px;">: 4 issues per year (March, June, September, and December)</td> </tr> <tr style="height: 18px;" valign="top"> <td style="width: 110.312px; height: 18px;">DOI</td> <td style="width: 452.087px; height: 18px;">: 10.35335/ijobas - by Crossref</td> </tr> <tr style="height: 18px;" valign="top"> <td style="width: 110.312px; height: 18px;">OAI</td> <td style="width: 452.087px; height: 18px;">: <a href="https://ijobas.pelnus.ac.id/index.php/ijobas/oai" target="_blank" rel="noopener">https://ijobas.pelnus.ac.id/index.php/ijobas/oai</a></td> </tr> <tr style="height: 18px;" valign="top"> <td style="width: 110.312px; height: 18px;">Editor-in-chief</td> <td style="width: 452.087px; height: 18px;">: <strong>Desi Vinsensia</strong></td> </tr> <tr style="height: 46px;" valign="top"> <td style="width: 110.312px; height: 46px;">Publisher</td> <td style="width: 452.087px; height: 46px;"> <p>: Institute of Computer Science (IOCS)</p> </td> </tr> <tr style="height: 51px;" valign="top"> <td style="width: 110.312px; height: 51px;">INDEXING BY</td> <td style="width: 452.087px; height: 51px;"> <p>: <a href="https://www.scopus.com/sourceid/21101281043" target="_blank" rel="noopener">SCOPUS <img src="https://ieeeaccess.ieee.org/wp-content/uploads/2014/10/scopus-transparent.png" alt="" width="25" height="25" /></a></p> </td> </tr> </tbody> </table>https://www.ijobas.pelnus.ac.id/index.php/ijobas/article/view/798Electrooculography Based Control of
a Robotic Manipulator with Dual Cameras for Object Retrieval2026-01-18T06:57:31+00:00Muhammad Ilhamdi Rusydirusydi@eng.unand.ac.idAndre Paskah Gultom2010951003_andre@student.unand.ac.idAdam Jordan2220952010_adam@student.unand.ac.idRahmad Novan Nurhadi2320952002_rahmad@student.unand.ac.idDarwison Darwisondarwison@eng.unand.ac.id<p>This study presents an assistive control system for a four-degree-of-freedom (4-DoF) robotic manipulator that integrates image-based spatial perception with electrooculography (EOG)-based human–machine interaction for three-dimensional object retrieval. The system is motivated by the need for intuitive, non-contact assistive technologies to support individuals with severe motor impairments, such as tetraplegia, in performing basic manipulation tasks. The proposed framework employs an orthogonal dual-camera vision configuration to achieve explicit 3D target localization, where planar object positions on the XY plane and depth along the Z axis are estimated using focal length–based geometric modeling. User commands are generated through an EOG interface, in which eye movements and voluntary blinks are classified using a K-Nearest Neighbor (KNN) algorithm to control manipulator motion. Compared to conventional assistive robotic systems that rely on depth sensors or high-degree-of-freedom manipulators, the proposed approach utilizes asymmetric monocular viewpoints and a minimal 4-DoF architecture to reduce system complexity. Experimental results demonstrate high performance, achieving average localization accuracies of 99.52% on the XY plane and 95.88% along the Z axis, as well as an EOG classification accuracy of 94.38%. Manipulation experiments confirmed reliable operation with a 100% task success rate, while task completion time and positional error increased gradually with target distance. 
These findings validate the feasibility of the proposed system as a low-complexity, high-accuracy assistive robotic solution for rehabilitation and human–machine interaction applications.</p>2026-03-31T00:00:00+00:00Copyright (c) 2026 Muhammad Ilhamdi Rusydi, Andre Paskah Gultom, Adam Jordan, Rahmad Novan Nurhadi, Darwison Darwisonhttps://www.ijobas.pelnus.ac.id/index.php/ijobas/article/view/841Augmented Reality Applications for Enhancing Environmental Awareness in Smart Tourism: A Systematic Literature Review2026-02-17T09:59:32+00:00Victor Marudut Mulia Siregarvictor.siregar2@gmail.comAndi Setiadi Manaluandi.manaloe@gmail.comRoy Sahputra Saragihroysahputra31@gmail.com<p>Augmented Reality (AR) has been increasingly adopted in smart tourism to enhance visitor experiences and support sustainability-oriented learning. However, empirical evidence regarding how AR applications contribute to environmental awareness and sustainable tourism practices remains fragmented and insufficiently synthesized. This study conducts a systematic literature review to examine the role of AR in enhancing environmental awareness within smart tourism contexts and its potential contribution to sustainability-oriented tourism development. The review addresses three research questions concerning the implementation of AR applications for environmental learning, the comparative effectiveness of AR and non-AR approaches, and the key challenges and research opportunities associated with AR in sustainability-oriented tourism. The review follows the PICOC framework (Population, Intervention, Comparison, Outcome, Context) and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. A structured search of the Scopus database covering publications from 2022 to 2025 resulted in 32 empirical journal articles included in the final analysis. 
The findings indicate that AR applications, such as mobile AR systems, location-based interpretation, immersive environmental visualization, and gamified learning tools, are widely implemented in tourism environments including museums, heritage sites, geotourism destinations, natural parks, and wildlife attractions. Overall, AR tends to enhance environmental understanding, emotional engagement, and pro-environmental intentions more effectively than conventional interpretation media. These outcomes contribute to strengthening visitor awareness of environmental conservation and responsible tourism behavior. This review synthesizes fragmented empirical evidence and highlights key methodological and technological gaps while outlining future research directions for advancing AR-based environmental learning and sustainability practices within smart tourism ecosystems.</p>2026-03-31T00:00:00+00:00Copyright (c) 2026 Victor Marudut Mulia Siregar, Andi Setiadi Manalu, Roy Sahputra Saragihhttps://www.ijobas.pelnus.ac.id/index.php/ijobas/article/view/845Explainable Mitochondrial Image Segmentation and Morphological Quantification using Deep Learning Based Framework2025-12-30T12:03:07+00:00Vandana Malikvandana@uithpu.ac.inA. J Singhsingh@uithpu.ac.in<p>Mitochondria are essential cell organelles with varying shapes and sizes. Slight changes in mitochondrial morphology can lead to neurodegenerative diseases. Advanced deep learning-based models such as U-Net, Mask R-CNN, MitoNet, MitoStructSeg, and MitoSkel perform accurate mitochondrial image analysis through image segmentation or morphological quantification, but they lack the ability to interpret the results they produce.
This research work proposes a novel unified XM-DL framework (Explainable Mitochondrial Deep Learning Based Framework) capable of performing multiple tasks, such as image segmentation, morphological quantification, classification of mitochondria on the basis of their shape, and interpretation of results using explainable artificial intelligence (XAI) techniques, within a single pipeline. The XM-DL framework is composed of a U-Net architecture integrated with residual connections, skip connections, and attention gates for image segmentation, followed by a post-processing module for morphological quantification, with Gradient-weighted Class Activation Mapping (Grad-CAM) providing the explainable-AI component. The XM-DL framework was trained on the MitoEM dataset and achieved a high F1 score of 0.9322 and an IoU (intersection over union) of 0.8793 on the image segmentation task. The XM-DL framework assists medical service providers by improving the interpretability and understanding of deep learning techniques.</p>2026-03-31T00:00:00+00:00Copyright (c) 2026 Vandana Malik, A. J Singhhttps://www.ijobas.pelnus.ac.id/index.php/ijobas/article/view/771Enhancing XGBoost performance for classification tasks using particle swarm optimization and SHAP-based model interpretability2025-10-03T03:13:43+00:00Mohammad Andri Budimanmandrib@usu.ac.idJonson Manurungjhonson.geo@gmail.com<p>Phishing remains one of the most critical and rapidly evolving cyber threats, with increasing incidents that challenge conventional detection mechanisms such as blacklist-based approaches. Although machine learning models have improved phishing detection accuracy, many studies emphasize performance optimization without adequately addressing model interpretability and transparent decision-making.
This study aims to develop an optimized and explainable phishing detection framework by integrating XGBoost with Particle Swarm Optimization (PSO) for hyperparameter tuning and SHAP for interpretability analysis. The proposed approach was evaluated on the UCI Phishing Websites dataset consisting of 11,055 samples and 30 features, using accuracy, precision, recall, F1-score, and ROC-AUC as performance metrics. Experimental results show that XGBoost optimized using PSO achieved the best performance with an accuracy of 0.911, precision of 0.906, recall of 0.902, F1-score of 0.904, and ROC-AUC of 0.935, outperforming Random Forest (accuracy 0.896; ROC-AUC 0.921), SVM (accuracy 0.872; ROC-AUC 0.903), and XGBoost with default hyperparameters (accuracy 0.842; ROC-AUC 0.875). Furthermore, SHAP analysis identified key influential features such as Have_IP and URL_Length, providing transparent insights into model decisions. These findings demonstrate that combining metaheuristic optimization with explainable AI significantly enhances both predictive performance and interpretability, contributing to the development of reliable and trustworthy phishing detection systems in dynamic cybersecurity environments.</p>2026-03-31T00:00:00+00:00Copyright (c) 2026 Mohammad Andri Budiman, Jonson Manurunghttps://www.ijobas.pelnus.ac.id/index.php/ijobas/article/view/850 Scenario based two stage production planning for cassava SMEs under demand uncertainty2026-01-07T06:08:35+00:00Dedy Juliandry Panjaitanjuliandrydedy@gmail.comRima ApriliaAprilia@uinsu.ac.idFirmansyah FirmansyahFirmansyah@umnaw.ac.id<p>Production planning in small and medium sized enterprises (SMEs) is commonly based on deterministic assumptions that do not fully reflect uncertain market demand. This study develops a scenario-based production planning approach to support feasible and cost efficient decisions under demand uncertainty. 
A two-stage stochastic programming model with demand scenarios is applied to a real multi-product SME case, where three demand scenarios (pessimistic, most likely, and optimistic) are constructed from historical data. The model incorporates production costs, raw material availability, labor capacity, and machine capacity constraints, and is solved using a standard linear programming solver with actual operational data. The results indicate that optimal production quantities and total production costs vary across demand scenarios due to differences in demand limits and resource availability. While deterministic planning becomes infeasible under extreme demand conditions, the proposed two-stage stochastic programming model consistently produces feasible and cost-efficient production plans across all demand scenarios, highlighting its usefulness as a practical decision-support tool for SMEs facing demand uncertainty.</p>2026-04-13T00:00:00+00:00Copyright (c) 2026 Dedy Juliandry Panjaitan, Rima Aprilia, Firmansyah Firmansyah
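The two-stage stochastic programming approach described in the last abstract can be sketched through its deterministic equivalent: choose first-stage production quantities, then penalize expected shortfall over the demand scenarios. The sketch below is a minimal illustration only; the costs, capacities, scenario demands, and probabilities are made-up placeholders, not the paper's operational data, and SciPy's `linprog` stands in for whatever "standard linear programming solver" the authors used.

```python
# Minimal two-stage stochastic LP sketch: minimize first-stage production
# cost plus expected shortfall penalty over three demand scenarios
# (pessimistic, most likely, optimistic). Illustrative numbers only.
import numpy as np
from scipy.optimize import linprog

n_products = 2
cost = np.array([3.0, 5.0])        # unit production cost (first stage)
penalty = np.array([9.0, 12.0])    # unit shortfall penalty (second stage)
labor = np.array([1.0, 2.0])       # labor hours per unit produced
labor_cap = 120.0                  # total labor hours available

demand = np.array([[40.0, 20.0],   # pessimistic scenario
                   [60.0, 30.0],   # most likely scenario
                   [80.0, 45.0]])  # optimistic scenario
prob = np.array([0.25, 0.5, 0.25])
S = len(prob)

# Decision vector: [x (n), y_1 (n), ..., y_S (n)],
# where x is production and y_s is the shortfall in scenario s.
c = np.concatenate([cost] + [prob[s] * penalty for s in range(S)])

# Demand coverage per scenario: x + y_s >= d_s  ->  -x - y_s <= -d_s
A_ub, b_ub = [], []
for s in range(S):
    for i in range(n_products):
        row = np.zeros(n_products * (S + 1))
        row[i] = -1.0
        row[n_products * (s + 1) + i] = -1.0
        A_ub.append(row)
        b_ub.append(-demand[s, i])

# Labor capacity binds the first-stage production only.
row = np.zeros(n_products * (S + 1))
row[:n_products] = labor
A_ub.append(row)
b_ub.append(labor_cap)

# Default bounds (0, None) give nonnegative production and shortfall.
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), method="highs")
x = res.x[:n_products]
print("production plan:", np.round(x, 2))
print("expected total cost:", round(res.fun, 2))
```

Because the shortfall variables can always absorb unmet demand, this formulation stays feasible for every scenario, which mirrors the abstract's point that the stochastic model remains feasible where a deterministic plan would not.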