Association of occupational physical activity and sedentary behaviour with the risk of hepatocellular carcinoma: a case-control study based on the Inpatient Clinico-Occupational Database of Rosai Hospital Group

Objectives
While there is growing evidence that physical activity reduces the risk of hepatocellular carcinoma (HCC), the impact of occupational physical activity and sedentary behaviour remains unclear. This study aimed to investigate the associations between occupational physical activity and sedentary behaviour and HCC risk.

Design
Matched case-control study.

Setting
Nationwide multicentre, hospital-inpatient data set in Japan, from 2005 to 2021.

Participants
The study included 5625 inpatients diagnosed with HCC and 27 792 matched controls without liver disease or neoplasms. Participants were matched based on sex, age, admission date, and hospital.

Primary measures
The association of occupational physical activity levels (low, medium, high) and sedentary time (short, medium, long) with the risk of HCC.

Secondary measures
Stratification of HCC risk by viral infection status (hepatitis B/C virus), alcohol consumption levels and the presence of metabolic diseases (hypertension, diabetes, dyslipidaemia, obesity).

Results
High occupational physical activity was not associated with HCC caused by hepatitis B/C virus infection in men. In women, high occupational physical activity was associated with a reduced risk of non-viral HCC, with an OR (95% CI) of 0.65 (0.45–0.93). Among patients with non-viral HCC, medium occupational physical activity combined with medium alcohol intake further decreased the HCC risk in men, with an OR of 0.70 (0.50–0.97), while high occupational physical activity combined with the lowest alcohol intake decreased the HCC risk in women, with an OR of 0.69 (0.48–0.99). Men and women with medium sedentary time had a lower HCC risk than those with long sedentary time, with ORs of 0.88 (0.79–0.98) in men and 0.77 (0.62–0.97) in women. Among women without viral infection or alcohol use, medium sedentary time reduced the HCC risk associated with fatty liver disease without comorbid metabolic diseases.
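
For readers who want to see how such estimates arise, the sketch below computes an unadjusted odds ratio with a Wald 95% CI from a hypothetical 2×2 exposure table; the counts are invented for illustration, and the study's published ORs come from models fitted to the matched sets rather than from this crude calculation.

```python
import math

# Hypothetical 2x2 table (invented counts, not study data):
# rows = occupational physical activity (high vs low), columns = case/control
cases_exposed, controls_exposed = 120, 700
cases_unexposed, controls_unexposed = 300, 1100

# Crude (unadjusted) odds ratio
or_hat = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Wald 95% CI, computed on the log-odds scale
se_log_or = math.sqrt(1 / cases_exposed + 1 / controls_exposed
                      + 1 / cases_unexposed + 1 / controls_unexposed)
lower = math.exp(math.log(or_hat) - 1.96 * se_log_or)
upper = math.exp(math.log(or_hat) + 1.96 * se_log_or)

print(f"OR = {or_hat:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```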

Conclusions
High levels of occupational physical activity and/or medium periods of sedentary time are associated with a reduced risk of HCC, particularly HCC related to non-alcoholic steatohepatitis.

March 2025

Aetiopathogenesis, clinico-epidemiological and diagnostic aspects of human ocular trematode infections: a scoping review protocol

Introduction
Human ocular trematode infections, caused by parasitic flatworms, are a significant public health concern worldwide and can lead to mild-to-severe consequences if left untreated. This protocol outlines a scoping review methodology that aims to map the knowledge on the aetiopathogenesis, clinico-epidemiological and diagnostic aspects, and patient perspectives related to ocular trematode infections in humans.

Methods and analysis
The review, including the development of this protocol, will be conducted over 2 years from January 2024. The Joanna Briggs Institute Reviewers’ Manual and the framework developed by Arksey and O’Malley will guide the scoping review proposed in this protocol. Accordingly, the PCC (Population, Concept, Context) framework and a three-stage search strategy will be used to develop the research question and to conduct the search, respectively. Publications up to December 2024 will be searched across multiple databases, including MEDLINE/PubMed, Scopus, ScienceDirect, CINAHL and Google Scholar.

Ethics and dissemination
Since the study will use secondary data, ethical approval is not required. The scoping review’s findings will be published in a scientific journal and presented at relevant conferences, with the aim of improving disease outcomes by guiding future research on ocular trematode infections and informing strategies to strengthen disease control, prevention measures and patient care.

February 2025

Abstract WP19: Assessing the Ability of ChatGPT to Guide the Decision for Intravenous Thrombolysis in Patients with Acute Ischemic Strokes

Stroke, Volume 56, Issue Suppl_1, Page AWP19-AWP19, February 1, 2025.

Introduction
Artificial intelligence is emerging as a promising adjunct tool in medicine. ChatGPT, an AI chatbot that uses machine learning to create human-like dialogue, has shown strong potential in the medical field, specifically in aiding professionals with clinical reasoning and diagnosis. We aim to assess the ability of ChatGPT to guide the decision for intravenous thrombolysis (IVT) in patients with acute ischemic strokes (AIS).

Methods
The performance of ChatGPT in clinical decision making for IVT was compared with that of a board-certified stroke neurologist using artificially created AIS scenarios covering a broad range of indications and contraindications. ChatGPT was asked to act as an acute stroke clinical decision tool and to answer with a “yes” or “no” along with a one-sentence justification for its answer. The accuracy and interpretation skills of ChatGPT and the stroke neurologist were analyzed by a blinded assessor with more than 10 years of experience in stroke neurology.

Results
Out of the 20 scenarios, ChatGPT’s decision to pursue or withhold IVT was deemed congruent with the stroke neurologist’s in 16 scenarios (80%). Based on the blinded assessor’s judgement, two clinical decisions made by ChatGPT and one made by the stroke neurologist were rendered wrong. In one case, ChatGPT’s reasoning was deemed incorrect, but the correct clinical decision was made not to provide IVT. In another case, both the stroke neurologist and ChatGPT gave reasonable explanations despite different clinical decisions, both deemed plausible by the assessor. Overall, both ChatGPT’s decision to pursue or withhold IVT and the explanation it provided were deemed accurate by the assessor in 17 scenarios (85%).

Conclusion
This study shows that ChatGPT performed well in most scenarios, potentially reinforcing its ability to guide clinical decision making for IVT in AIS patients. However, ChatGPT is still prone to errors, as shown by its inability to consistently depict contraindications for IVT, and may not yet be ready for independent use.
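
The abstract does not publish the exact prompt or interface used; the sketch below is one plausible way to reproduce the described setup with the OpenAI Python client, where the model name, system prompt wording and scenario text are all assumptions rather than the authors' materials.

```python
# Minimal sketch of the described prompting setup (assumed details, not
# the authors' actual code): ChatGPT is instructed to act as an acute
# stroke clinical decision tool and answer "yes" or "no" with a
# one-sentence justification.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Act as an acute stroke clinical decision tool. For each patient "
    "scenario, answer 'yes' or 'no' to intravenous thrombolysis, followed "
    "by a one-sentence justification."
)

def ivt_decision(scenario: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; the abstract does not name a version
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": scenario},
        ],
    )
    return response.choices[0].message.content

# Hypothetical scenario, invented for illustration
print(ivt_decision("72-year-old, acute ischemic stroke, onset 2 hours ago, "
                   "NIHSS 9, no anticoagulation."))
```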

January 2025

ChatGPT (GPT-4) versus doctors on complex cases of the Swedish family medicine specialist examination: an observational comparative study

Background
Recent breakthroughs in artificial intelligence research include the development of generative pretrained transformers (GPT). ChatGPT has been shown to perform well when answering several sets of medical multiple-choice questions. However, it has not been tested for writing free-text assessments of complex cases in primary care.

Objectives
To compare the performance of ChatGPT, version GPT-4, with that of real doctors.

Design and setting
A blinded observational comparative study conducted in the Swedish primary care setting. Responses from GPT-4 and real doctors to cases from the Swedish family medicine specialist examination were scored by blinded reviewers, and the scores were compared.

Participants
Anonymous responses from the Swedish family medicine specialist examination 2017–2022 were used.

Outcome measures
Primary: the mean difference in scores between GPT-4’s responses and randomly selected responses by human doctors, as well as between GPT-4’s responses and top-tier responses by human doctors. Secondary: the correlation between differences in response length and response score; the intraclass correlation coefficient between reviewers; and the percentage of maximum score achieved by each group in different subject categories.
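
As a rough illustration of two of these measures, the sketch below computes the primary mean score difference and a length-score correlation over invented data; the correlation method is an assumption, since the abstract does not name one.

```python
# Invented scores for illustration only; Pearson correlation is assumed
# here because the abstract does not state which correlation was used.
import numpy as np
from scipy.stats import pearsonr

gpt4_scores   = np.array([4.0, 5.5, 3.0, 6.0, 4.5])    # invented, 10-point scale
doctor_scores = np.array([6.5, 7.0, 5.0, 6.0, 5.5])    # invented, 10-point scale
lengths       = np.array([850, 900, 600, 1100, 700])   # invented response lengths

print("mean score difference:", (doctor_scores - gpt4_scores).mean())
r, p = pearsonr(lengths, gpt4_scores)
print(f"length-score correlation: r = {r:.2f}, p = {p:.3f}")
```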

Results
The mean scores were 6.0, 7.2 and 4.5 for randomly selected doctor responses, top-tier doctor responses and GPT-4 responses, respectively, on a 10-point scale. The scores for the random doctor responses were, on average, 1.6 points higher than those of GPT-4 (p

December 2024

Exploring the use of ChatGPT-4o in enhancing career development counseling for medical students: a study protocol

Background
The advent of generative pre-trained transformers, exemplified by ChatGPT, has significantly impacted medical education, catalysing a wealth of research focused on enhancing educational methodologies. Despite this, the application of ChatGPT in the specific area of career planning for medical students remains relatively unexplored. This study seeks to rigorously evaluate the potential of ChatGPT-4o in facilitating the career planning of medical students, comparing its effectiveness with that of traditional human educators. It also aims to identify optimal strategies for integrating ChatGPT-4o with human educators to maximise support in career planning for medical students.

Methods
Adopting a mixed-methods approach, this research combines qualitative insights from interviews with quantitative data from questionnaires. The research is divided into two primary segments: first, evaluating the proficiency of ChatGPT-4o in aiding medical students’ career planning and, second, identifying effective collaborative practices between ChatGPT-4o and human educators.

Discussion
The study focuses on assessing ChatGPT-4o’s utility in career planning for medical students and determining how it can be best used within medical education by both educators and students. The aim is to augment the career planning consultation process, thereby enhancing the efficiency and quality of human educators’ contributions. This includes exploring how ChatGPT-4o can supplement traditional teaching methods, providing a more robust and comprehensive support system for career planning in medical education.

November 2024

Abstract 4134824: Evaluation of ChatGPT-4.0 and Google Bard’s Capabilities in Clinical Decision Support in Cardiac Electrophysiology

Circulation, Volume 150, Issue Suppl_1, Page A4134824-A4134824, November 12, 2024.

Background
ChatGPT-4.0 and Bard have shown clinical decision support (CDS) potential in general medicine, but their role in cardiac electrophysiology (EP) is unknown. This study aims to evaluate ChatGPT’s and Bard’s CDS potential by assessing their accuracy on multiple-choice questions (MCQs), guideline recommendations (GRs) and treatment (Tx) suggestions.

Methods
The two chatbots were tested with 15 clinical vignettes (CVs) and 47 case-related MCQs from Heart Rhythm Case Reports, focusing on ablation, arrhythmia and CIED management. CVs included narrative descriptions of diagnostic imaging results. Three tasks were performed: 1) generating GRs, rated 0 if incorrect or correct but irrelevant to the primary problem, 0.5 if correct for the primary problem, and 1 if case-specific (CS), that is, relevant to both the primary problem and concomitant conditions (e.g. atrial fibrillation with heart failure); 2) suggesting Tx steps, scored 0 if incorrect, 0.5 if correct and 1 if CS; Tx was deemed correct if referenced in the case or guidelines and CS if used in the case, and for Tx responses that were not CS, a prompt containing one similar CV and its Tx from PubMed case reports was provided before reassessment; 3) answering MCQs, rated 1 if correct and 0 if incorrect. Welch’s t-test was used for analysis.

Results
Bard outperformed ChatGPT in generating CS GRs (p=0.01). However, there was no significant difference in CS Tx suggestions with a prompt (p=0.12, Figure 1C) or without one (p=0.59, Figure 1A). When prompted for non-CS Tx responses, ChatGPT improved significantly from 0.66 to 0.93 (p=0.02), suggesting an enhanced ability to provide CS Tx plans post-prompt. In contrast, Bard showed no notable improvement (0.73 vs 0.76, p=0.79, Figure 1B). Both chatbots demonstrated similar MCQ accuracy, with scores below 70%, indicating EP training gaps or the need for prompts to activate existing knowledge.

Conclusion
This study showed Bard’s superiority in generating GRs and ChatGPT’s marked improvement in suggesting Tx when external knowledge is provided, revealing their CDS potential in specialized fields.
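
To make the analysis concrete, the sketch below compares two lists of per-case rubric scores (0, 0.5 or 1, as defined above) with Welch's t-test via scipy; the scores are invented for illustration and are not the study's data.

```python
# Welch's t-test (unequal variances) over invented per-case rubric scores
from scipy.stats import ttest_ind

chatgpt_scores = [1, 0.5, 0.5, 1, 0, 0.5, 1, 0.5, 0.5, 1, 0.5, 0, 1, 0.5, 0.5]
bard_scores    = [1, 1, 0.5, 1, 0.5, 1, 0.5, 1, 0.5, 1, 1, 0.5, 1, 0.5, 1]

t_stat, p_value = ttest_ind(chatgpt_scores, bard_scores, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
```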

November 2024

Abstract 4145543: ChatGPT-4 Improves Readability of Institutional Heart Failure Patient Education Materials

Circulation, Volume 150, Issue Suppl_1, Page A4145543-A4145543, November 12, 2024.

Introduction
Heart failure management is complex, involving lifestyle modifications such as daily weights, fluid and sodium restriction, and blood pressure monitoring, which place additional responsibility on patients and caregivers. Successful adherence requires comprehensive counseling and understandable patient education materials (PEMs). Prior research has shown that many PEMs related to cardiovascular disease exceed the American Medical Association’s (AMA) recommended 5th-6th grade reading level. The large language model (LLM) Chat Generative Pre-trained Transformer (ChatGPT) may be a useful adjunct resource for patients with heart failure to bridge this gap.

Research Question
Can ChatGPT-4 improve heart failure institutional PEMs to meet the AMA’s recommended 5th-6th grade reading level while maintaining accuracy and comprehensiveness?

Methods
There were 143 heart failure PEMs collected from the websites of the top 10 institutions listed in the 2022-2023 US News & World Report ranking of “Best Hospitals for Cardiology, Heart & Vascular Surgery”. The PEMs of each institution were entered into ChatGPT-4 (version updated 20 July 2023), preceded by the prompt “please explain the following in simpler terms”. The readability of each institutional PEM and of the corresponding ChatGPT response was assessed using the Textstat library in Python and the Textstat readability package in R. The accuracy and comprehensiveness of each response were also assessed by a board-certified cardiologist.

Results
The average Flesch-Kincaid grade reading level was 10.3 (IQR: 7.9, 13.1) vs 7.3 (IQR: 6.1, 8.5) for institutional PEMs and ChatGPT responses (p<0.001), respectively. There were 13/143 (9.1%) institutional PEMs meeting a 6th grade reading level, which improved to 33/143 (23.1%) after prompting by ChatGPT-4. There was also a significant difference found for each readability metric assessed when comparing institutional PEMs with ChatGPT-4 responses (p
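
The Textstat package named in the methods exposes these readability metrics directly; a minimal sketch, with a placeholder excerpt standing in for a real institutional PEM:

```python
# Readability check with the textstat package (pip install textstat);
# the text below is a placeholder, not an actual institutional PEM.
import textstat

pem_text = (
    "Weigh yourself every morning after you use the bathroom and before "
    "you eat. Call your care team if you gain more than two pounds in a day."
)

print("Flesch-Kincaid grade level:", textstat.flesch_kincaid_grade(pem_text))
print("Flesch reading ease:", textstat.flesch_reading_ease(pem_text))
```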

November 2024

Abstract 4116844: Simplifying Cardiology Research Abstracts: Assessing ChatGPT's Readability and Comprehensibility for Non-Medical Audiences

Circulation, Volume 150, Issue Suppl_1, Page A4116844-A4116844, November 12, 2024.

Background
Artificial intelligence (AI)-powered chatbots like ChatGPT are increasingly used in academic medical settings to help with tasks such as evidence synthesis and manuscript drafting. They have shown potential in simplifying complex medical texts for non-medical audiences. However, less is known about whether simplified texts may exclude important information or be of interest to patients or other non-medically-trained people such as journalists.

Objective
This study aims to assess ChatGPT’s capacity to simplify cardiology research abstracts by eliminating jargon and enhancing universal comprehension.

Methods
We analyzed all abstracts and scientific statements published from July to November 2023 in Circulation (n=113). These abstracts were processed through ChatGPT with the prompt: “Please rewrite the following text to be comprehensible at a 5th-grade reading level. Retain all original information and exclude nothing”. We assessed the readability of both original and simplified texts using Flesch-Kincaid Grade Level (FKGL) and Reading Ease (FKRE) scores. Additionally, a panel of five physicians and five laypeople evaluated these texts for completeness, accuracy and understandability.

Results
ChatGPT transformation of abstracts reduced the required reading level from college graduate to 8th-9th grade by both FKGL (18.3 to 8.6; p
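
Both scores reported here are deterministic functions of word, sentence and syllable counts; for reference, the standard definitions are:

```latex
% Standard Flesch-Kincaid Grade Level and Flesch Reading Ease formulas
\mathrm{FKGL} = 0.39\,\frac{\text{words}}{\text{sentences}}
              + 11.8\,\frac{\text{syllables}}{\text{words}} - 15.59
\qquad
\mathrm{FKRE} = 206.835 - 1.015\,\frac{\text{words}}{\text{sentences}}
              - 84.6\,\frac{\text{syllables}}{\text{words}}
```

Note that the two scales run in opposite directions: a lower FKGL but a higher FKRE indicates easier text.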

November 2024

Abstract 4142343: My AI Ate My Homework: Measuring ChatGPT Performance on the American College of Cardiology Self-Assessment Program

Circulation, Volume 150, Issue Suppl_1, Page A4142343-A4142343, November 12, 2024.

Background
Artificial intelligence (AI) is a rapidly growing field with promising utility in health care. ChatGPT is a large language model by OpenAI trained on extensive data to comprehend and answer a variety of questions, commands and prompts. Despite the promise AI offers, there are still glaring deficiencies.

Methods
310 questions from the American College of Cardiology Self-Assessment Program (ACCSAP) question bank were queried. 50% of questions from each of the following sections were randomly selected: coronary artery disease, arrhythmias, valvular disease, vascular disease, systemic hypertension and hypotension, pericardial disease, systemic disorders affecting the circulatory system, congenital heart disease, heart failure and cardiomyopathy, and pulmonary circulation. Questions were fed into the legacy ChatGPT-3.5 version with and without answer choices, and the accuracy of its responses was recorded. Statistical analysis was performed using the Microsoft Excel statistical package.

Results
Human respondents were, on average, 77.86% +/- 16.01% accurate, with an IQR of 21.08%. Without answer-choice prompting, ChatGPT was correct 57.93% and inconclusive 7.77% of the time. When prompted with answer choices, ChatGPT was correct only 20.91% and inconclusive 14.55% of the time. Additionally, an average of 55.47% +/- 35.55% of human respondents (IQR 73.19%) selected the same answer choice as ChatGPT. Finally, on a scale of 1 to 5, with 1 being the most picked and 5 the least picked, the answer ChatGPT selected ranked 1.66 out of 5 on average among human respondents. 94 of the 310 questions (30.32%) contained images in the question stem; only 2 of the 310 (0.65%) contained images in the answer choices.

Conclusion
To our knowledge, data on ChatGPT’s performance in cardiology board preparation are limited. Our analysis shows that while AI software has become increasingly comprehensive, progress is still needed to accurately answer complex medical questions.
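
For the descriptive statistics quoted above (mean, spread and IQR of accuracy), a minimal numpy sketch over invented per-respondent accuracies:

```python
# Mean +/- sample SD and IQR over invented accuracy values (illustration only)
import numpy as np

human_accuracy = np.array([0.62, 0.71, 0.80, 0.88, 0.93, 0.75, 0.67, 0.85])

mean = human_accuracy.mean()
sd = human_accuracy.std(ddof=1)              # sample standard deviation
q1, q3 = np.percentile(human_accuracy, [25, 75])

print(f"mean = {mean:.2%} +/- {sd:.2%}, IQR = {q3 - q1:.2%}")
```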

November 2024