Stating that statistical methods are useful in machine learning is analogous to saying that woodworking methods are helpful to a carpenter. Statistics is the foundation of machine learning; however, not all machine learning methods derive from statistics. To begin, let us look at what statistics and machine learning mean.
Statistics is used extensively in science, finance, and industry. It is a mathematical science rather than a branch of pure mathematics, and it is said to have originated in the seventeenth century. It encompasses collecting data, organizing it, analyzing it, and interpreting and presenting the results. Statistical methods have long been used in many fields to understand data efficiently and to gain in-depth insight into it [1].
Machine learning, on the other hand, is a branch of computer science that uses statistical techniques to learn from a particular dataset [2]. The term was coined in 1959. A machine learning system learns by means of an algorithm and can then make predictions based on the data it has been fed. Compared with classical statistics, it can extract more detailed information from the data [3].
Most machine learning techniques derive their behavior from statistics, but this is not widely appreciated because the two fields use different jargon. For instance, what statistics calls fitting is called learning in machine learning, and supervised learning corresponds to regression and classification. Machine learning is a subfield of computer science and artificial intelligence. It makes fewer assumptions than statistics, deals with far larger amounts of data, and requires less human effort, since most of the computation is performed by the machine itself. Machine learning also has stronger predictive power than statistics. Depending on the type of data, machine learning can be categorized into supervised learning, unsupervised learning, and reinforcement learning [4].
There is a clear analogy between machine learning and statistics. Figure 1 shows how statistics and machine learning visualize a model, and Table 1 shows how statistical terms have been re-coined in machine learning.
| Machine learning | Statistics |
|---|---|
| Network, graphs | Model |
| Weights | Parameters |
| Learning | Fitting |
| Generalization | Test set performance |
| Supervised learning | Regression/classification |
| Unsupervised learning | Density estimation, clustering |
Machine learning jargon and the corresponding statistics jargon.
To understand how machine learning and statistics arrive at their results, consider Figure 1. In the statistical modeling shown on the left half of the image, linear regression with two variables fits the best plane while keeping the errors small. In the machine learning approach on the right half, the independent variables are transformed (here, squared) so that the model fits the data as closely as possible. In other words, machine learning strives for a better fit than the statistical model; in doing so, it minimizes the errors and increases predictive accuracy.
Statistical and machine learning method.
Statistical methods are useful not only for training the machine learning model but also at many other stages of the machine learning pipeline, such as:
Data preparation—statistics is used to preprocess the data before it is sent to the model. For instance, when there are missing values in the dataset, we can compute the statistical mean or median and fill the empty cells with it; a machine learning model should never be fed a dataset with empty cells. Statistics is also used at the preprocessing stage to scale the data, mapping the values into a particular range so that the mathematical computation during training becomes easier (see the sketch after this list).
Model evaluation—no model predicts perfectly the first time it is built, and simply building the model is not enough. It is vital to check how well it performs and, if it falls short, how close it is to being accurate enough. Hence we evaluate the model with statistical methods, which tell us how accurate the result is and much more about it. We make use of metrics such as the confusion matrix, the Kolmogorov–Smirnov chart, ROC AUC, and root mean squared error, among many others, to assess and improve the model.
Model selection—we train several candidate algorithms, and only the one that gives more accurate results than the others is chosen. The process of picking the right candidate is called model selection. Two kinds of statistical methods can be used to select the appropriate model: statistical hypothesis tests and estimation statistics [5].
Data selection—some datasets carry a great many features, and often only some of them contribute significantly to the result. Considering all the features is computationally expensive as well as time consuming. Using statistical concepts, we can eliminate the features that do not contribute significantly to the result; that is, statistics helps identify the variables or features on which a given result actually depends. Note, however, that this step requires a careful and skilled approach, without which it may lead to wrong results.
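To make the data-preparation step concrete, here is a minimal sketch in scikit-learn with a made-up feature matrix: the missing cell is filled with the column mean, and the features are then scaled to zero mean and unit variance. All values below are illustrative only.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
# hypothetical feature matrix with a missing cell (np.nan)
X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])
# fill missing cells with the column mean (the median is another common choice)
imputer = SimpleImputer(strategy="mean")
X_filled = imputer.fit_transform(X)
# scale each feature to zero mean and unit variance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_filled)
print(X_scaled)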
In this chapter we look at how statistical methods such as regression and classification are used in machine learning, with their merits and demerits.
Regression is a statistical technique, used in finance, investing, and many other areas, that aims to determine the relationship between a dependent variable and some number of independent variables. Regression comes in two broad forms:
Simple linear regression—where one independent variable is used to explain or predict the outcome of the dependent variable.
Multiple regression—where two or more independent variables are used to explain or predict the outcome of the dependent variable.
In statistical modeling, regression analysis consists of a set of statistical methods for estimating how the variables are related to each other.
Linear and logistic regression are the types most commonly used in predictive modeling [6].
Linear regression assumes that the relationship between the variables is linear, that is, that they are linearly dependent. The input variables are denoted X1, X2, …, Xn (where n is a natural number).
Linear models were developed long ago, yet they still produce meaningful results today; even in the era of modern computers they hold their own. They are widely used because they are simple, and in prediction they can even outperform complex nonlinear models.
Many kinds of regression can be performed. We look at the most widely used regression techniques:
Linear regression
Logistic regression
Polynomial regression
Stepwise regression
Ridge regression
Lasso regression
Any regression method involves the following:
The unknown parameters, denoted by beta
The dependent variable, also known as the output variable
The independent variables, also known as the input variables
In functional form, the model is written as:
Y ≈ f(X, beta)
where the function f relates the independent variables X and the parameters beta to the dependent variable Y.
It is the most widely used regression type by far. Linear regression establishes a relationship between the input variables (independent variables) and the output variable (dependent variable).
It assumes that the output variable is a linear combination of the input variables. A linear regression line is represented by Y = a + bX, where X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept (the value of Y when X = 0).
A linear regression with several input variables is represented by the equation:
Y = b0 + b1X1 + b2X2 + … + bnXn
where X1, …, Xn indicate the independent variables and Y is the dependent variable [7]. With a single input, this equation plots as a straight line, as shown in Figure 2.
Linear regression on a dataset.
However, linear regression makes the following assumptions:
That there is a linear relationship
There exists multivariate normality
There exists no multicollinearity, or only little multicollinearity, among the variables
There exists no autocorrelation among the residuals
The errors are homoscedastic (constant variance); there is no heteroscedasticity
Linear regression is fast and easy to model, easy to understand, and usually used when the relationship to be modeled is not complex. However, it is sensitive to outliers.
Note: In all of the usages stated in this chapter, we have assumed the following:
The dataset has been divided into a training set (denoted by X and y) and a test set (denoted by X_test and y_test)
The regression object “reg” has been created and exists.
We have used the following libraries:
SciPy and NumPy for numerical calculations
Pandas for dataset handling
Scikit-learn to implement the algorithm, to split the dataset and various other purposes.
Usage of linear regression in python:
import numpy as np
from sklearn import linear_model
#example data: heights (cm) as inputs and weights (kg) as targets
height = np.array([[150], [160], [170], [180], [190]])
weight = np.array([50, 57, 65, 72, 80])
#declare the linear regression estimator
reg = linear_model.LinearRegression()
#fit the model
reg.fit(height, weight)
#to check slope and intercept
m = reg.coef_[0]
b = reg.intercept_
print("slope=", m, "intercept=", b)
#check the accuracy (R^2) on the training set
reg.score(height, weight)
Logistic regression is used when the dependent variable is binary (true/false) in nature. The predicted value of y ranges from 0 to 1 (Figure 3), following the standard logistic function:
y = 1 / (1 + e^(−x))
Standard logistic function.
Logistic regression is used in classification problems, for example to classify emails as spam or not, or to predict whether a tumor is malignant. It is not mandatory that the input variables have a linear relationship to the output variable [8], because the method applies a nonlinear log transformation to the predicted odds. To increase the algorithm's performance, it is advisable to use only the variables that are powerful predictors.
However, it is important to note the following while making use of logistic regression:
It does not handle large numbers of categorical features well.
The non-linear features should be transformed before using them.
Usage of logistic regression in python:
from sklearn.linear_model import LogisticRegression
# instantiate a logistic regression model, and fit it with X and y
reg = LogisticRegression()
reg.fit(X, y)
# check the accuracy on the training set
reg.score(X, y)
It is a type of regression in which the power of at least one independent variable is greater than 1, for example:
y = a + bx + cx^2
The plotted graph is usually a curve, as shown in Figure 4.
The plotted graph is a curve.
If the degree of the equation is 2, the polynomial is called quadratic; if 3, cubic; and if 4, quartic. Polynomial regressions are fit with the method of least squares, since least squares minimizes the variance of the unbiased estimators of the coefficients under the conditions of the Gauss–Markov theorem. Although we may be tempted to fit a higher-degree polynomial to obtain a lower error, doing so can cause over-fitting [9].
Some guidelines which are to be followed are:
The model is more accurate when it is fed a large number of observations.
It is not a good idea to extrapolate beyond the limits of the observed values.
Values of the predictor should not be too large, or they will cause overflow at higher degrees.
Usage of polynomial regression in python:
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
#expand the inputs into polynomial features of the chosen degree
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(X)
#fit an ordinary linear model on the expanded features
reg = LinearRegression()
reg.fit(X_poly, y)
reg.score(X_poly, y)
This type of regression is used when we have multiple independent variables; an automatic procedure selects which of them to keep. Used in the right way, it is powerful and yields a great deal of information, and it is appropriate when the number of variables is large. Used haphazardly, however, it may hurt the model's performance.
To identify the independent variables that contribute significantly to the output variable, we use scores such as R-squared, adjusted R-squared, the F-statistic, Prob(F-statistic), log-likelihood, AIC, and BIC, among others.
It can be performed by any of the following ways:
Forward selection—we start with an empty set, add variables one at a time, and check how each addition affects the score.
Backward selection—we start with all the variables in the set and eliminate them one by one, checking the score after each elimination.
Bidirectional selection—a combination of the two methods mentioned above.
The greatest limitation of step-wise regression is that each instance or sample must have at least five attributes; below this, it has been observed that the algorithm does not perform well [10].
Code to implement Backward Elimination algorithm:
Assume that the dataset consists of 5 columns and 30 rows, stored in the variable 'X', and that the expected results are contained in the variable 'y'. Let 'X_opt' contain the independent variables used to determine the value of 'y'.
We are making use of a package called statsmodels, which is used to estimate the model and to perform statistical tests.
#import the numpy and statsmodels packages
import numpy as np
import statsmodels.api as sm
#add a column of ones on the left so the model has an intercept term
X = np.append(arr=np.ones([30, 1]).astype(int), values=X, axis=1)
#let X_opt contain the independent variables only and let y contain the output variable
X_opt = X[:, [0, 1, 2, 3, 4, 5]]
#fit ordinary least squares, assigning y to endog and X_opt to exog
regressor_OLS = sm.OLS(endog=y, exog=X_opt).fit()
regressor_OLS.summary()
The above code outputs a summary, on the basis of which the variable to be eliminated should be chosen. Once decided, remove that variable from 'X_opt' and refit.
It is used to handle high dimensionality of the dataset.
It can be used to analyze the data in detail. It is a technique for dealing with multicollinearity, that is, with independent variables that are highly correlated. It adds a degree of bias, and in doing so reduces the standard errors.
The multicollinearity of the data can be inspected with a correlation matrix: the higher the values, the stronger the multicollinearity. Ridge regression can also be used when the number of predictor variables in the dataset exceeds the number of instances or observations [11].
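As a quick illustration of inspecting multicollinearity with a correlation matrix, here is a minimal sketch using NumPy; the numbers are made up, with the second column roughly twice the first.
import numpy as np
# hypothetical predictors: the second column is roughly twice the first
X = np.array([[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.1]])
# off-diagonal entries close to 1 signal strong multicollinearity
print(np.corrcoef(X, rowvar=False))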
The equation for linear regression is
y = Xβ
This equation also contains an error; that is, it can be expressed as
y = Xβ + e
where e is an error term with mean zero and known variance.
Ridge regression shrinks the coefficient estimates by imposing a penalty on their size; it is also used to control the variance.
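In symbols, and assuming scikit-learn's parameterization (where alpha is the penalty weight, as in the snippet below), ridge regression chooses the coefficients β that minimize
||y − Xβ||² + alpha · ||β||²
so larger values of alpha shrink the coefficients more strongly.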
Figure 5 shows how ridge regression looks geometrically.
Ridge and OLS.
Usage of ridge regression in python:
from sklearn import linear_model
#ridge regression with regularization strength alpha = 0.5
reg = linear_model.Ridge(alpha=0.5)
reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1])
#to return the coefficients and intercept
reg.coef_
reg.intercept_
The least absolute shrinkage and selection operator is commonly known as LASSO. Lasso is a linear regression method that makes use of shrinkage: it shrinks the data values toward the mean, or toward a central point. It is used when there are high levels of multicollinearity [12].
It is similar to ridge regression, and in addition it can reduce the variability and improve the accuracy of linear regression models.
It has been used, for example, in prostate cancer data analysis and in the analysis of other cancer datasets.
Important points about LASSO regression:
It helps in feature selection by shrinking coefficients all the way to zero.
It makes use of L1 regularization.
If some predictors in the data are highly correlated, the algorithm selects only one of them and discards the rest.
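Putting the shrinkage in symbols, and assuming scikit-learn's parameterization of Lasso (with n training samples and penalty weight alpha), the coefficients β are chosen to minimize
(1 / (2n)) · ||y − Xβ||² + alpha · ||β||₁
where the L1 term ||β||₁ is what drives individual coefficients exactly to zero.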
Code to implement in python:
from sklearn import linear_model
#lasso regression with regularization strength alpha = 0.1
clf = linear_model.Lasso(alpha=0.1)
clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
#to return the coefficients and intercept
print(clf.coef_)
print(clf.intercept_)
A classification task is one whose output is of the type "category", such as segregating data with respect to some property. In machine learning and statistics, classification consists of assigning new data to the category where it fits best, on the basis of the data that has been used to train the model. Examples of tasks that use classification techniques are classifying emails as spam or not, detecting disease on plants, and predicting whether it will rain on a particular day.
In terms of machine learning classification techniques fall under supervised learning [13].
The categories may be either:
categorical (example: blood groups of humans—A, B, O)
ordinal (example: high, medium or low)
integer valued (example: occurrence of a letter in a sentence)
Real valued
The algorithms that use this concept in machine learning to classify new data are called "classifiers." Classifiers typically return a probability score of belonging to each class of interest. Consider, for example, a task where we must classify a gold ornament: when we feed an image to the machine learning model, the algorithm returns a probability value for each category, for instance higher than 0.8 for "ring" if it is a ring, and lower than 0.2 for "necklace" if it is not a necklace, and so on.
The higher the value, the more likely the object is to belong to that particular group.
We use the following approach to build a machine learning classifier (a minimal sketch follows the list):
Pick a cut-off probability above which we consider a record to belong to the class of interest.
Estimate the probability that a new observation belongs to that class.
If the estimated probability is above the cut-off, assign the new observation to the class.
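Concretely, here is a minimal sketch of this three-step recipe using a logistic regression classifier; the training data, the new observation, and the 0.5 cut-off are all illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
# hypothetical training data: two features per sample, binary labels
X = np.array([[0, 0], [1, 1], [2, 2], [3, 3]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
# step 1: pick a cut-off probability
cutoff = 0.5
# step 2: estimate the probability that a new observation belongs to class 1
prob = clf.predict_proba([[2.5, 2.5]])[0, 1]
# step 3: assign the observation to class 1 if the probability exceeds the cut-off
label = int(prob > cutoff)
print(prob, label)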
Classifiers are of two types: linear and nonlinear classifiers.
We now take a look at various classifiers that are also statistical techniques:
Naive Bayes
stochastic gradient descent (SGD)
K-nearest neighbors
decision trees
random forest
support vector machine
In machine learning, naive Bayes classifiers belong to the family of "probabilistic classifiers." The algorithm applies Bayes' theorem with strong independence assumptions between the features. Although naive Bayes classifiers were introduced in the early 1950s, they are still in use today [14].
Given a problem instance to be classified, represented by a vector
x = (x1, …, xn)
which represents n features, naive Bayes assigns to the instance a conditional probability p(Ck | x1, …, xn) for each of the K possible classes Ck.
We can observe that if the number of features n is large, or if a feature can take a large number of values, then basing such a model directly on probability tables is infeasible. Therefore we rewrite the formula using Bayes' theorem as:
p(Ck | x) = p(Ck) p(x | Ck) / p(x)
The classifier makes two "naive" assumptions about the attributes:
All attributes are a priori equally important
All attributes are statistically independent (the value of one attribute tells us nothing about the value of another)
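Under these assumptions the class-conditional likelihood factorizes, so the classifier effectively computes
p(Ck | x1, …, xn) ∝ p(Ck) × p(x1 | Ck) × ⋯ × p(xn | Ck)
and predicts the class Ck with the highest value.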
There are three types of naive Bayes algorithms, which can be used: GaussianNB, BernoulliNB, and MultinomialNB.
Usage of naive Bayes in python:
from sklearn.naive_bayes import GaussianNB
#fit a Gaussian naive Bayes classifier on the training data
reg = GaussianNB()
reg.fit(X, y)
#predict labels for the test set and check the training accuracy
reg.predict(X_test)
reg.score(X, y)
The SGD classifier is an example of a linear classifier that fits a regularized linear model (Figure 6) using stochastic gradient descent. Stochastic gradient descent (often shortened to SGD), also known as incremental gradient descent, is an iterative method for optimizing a differentiable objective function; it is a stochastic approximation of gradient descent optimization [15]. Although SGD has been part of machine learning for a long time, it was not used extensively until recently.
In the linear regression algorithm we use least squares to fit the line, and to keep the error low we use gradient descent. Although plain (batch) gradient descent does the job, it struggles with very large tasks, hence the stochastic gradient descent classifier. SGD computes the gradient on one training example at a time and applies the corresponding update immediately.
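In symbols, writing η for the learning rate and Li for the loss on training example i, each SGD step updates the weight vector w as
w ← w − η ∇Li(w)
so one cheap, noisy gradient evaluation per example replaces a full pass over the data.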
The advantages of the SGD classifier are that it is efficient and easy to implement.
Feature scaling classifier.
However it is sensitive to feature scaling.
Usage of SGD classifier:
from sklearn.linear_model import SGDClassifier
X = [[0., 0.], [1., 1.]]
y = [0, 1]
#linear model trained with SGD: hinge loss with an L2 penalty
clf = SGDClassifier(loss="hinge", penalty="l2")
clf.fit(X, y)
#to predict the value for a new sample
clf.predict([[2., 2.]])
The k-nearest neighbors method, also known as k-NN, is used for classification as well as regression. The input consists of the k closest training examples. It is also referred to as lazy learning, since the training phase requires very little effort.
In k-NN, an object's classification depends solely on the majority vote of its neighbors: the outcome is determined by which classes are present among them, and the object is assigned to the class most common among its k nearest neighbors. If k equals 1, the object is simply assigned to the class of its single nearest neighbor. Simply put, the k-NN algorithm relies entirely on the neighbors of the object to be classified. It is often described as the simplest of all machine learning algorithms [16].
Let us consider an example where the green circle is the object to be classified, as shown in Figure 7, and assume there are two neighborhoods around it: the solid circle and the dotted circle.
K-Neighbors.
There are two classes: class 1 (red circles) and class 2 (blue squares). If we consider only the inner, solid neighborhood, the red circles inside it outnumber the blue squares, so the new object is classified into class 1. But if we consider the dotted neighborhood, the blue squares dominate, since more of them fall inside it, and the object is classified into class 2 [17].
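The same flip can be reproduced in code. In this minimal sketch with made-up 2-D points, the prediction for the new object changes from class 1 to class 2 as k grows from 3 to 5.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
# hypothetical points: two from class 1, three from class 2
X = np.array([[1, 1], [1.5, 1], [3, 3], [3, 4], [4, 3]])
y = np.array([1, 1, 2, 2, 2])
new_point = [[2, 2]]
for k in (3, 5):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    # the vote among the k nearest neighbors decides the class
    print(k, clf.predict(new_point))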
On the plus side, the cost of the learning process is essentially zero.
The algorithm may suffer from the curse of dimensionality, since the number of dimensions greatly affects its performance. When the dataset is very large, the computation becomes complex, because the algorithm takes time to look for the neighbors; and when there are many dimensions, a sample's nearest neighbors can be far away. To avoid this, dimensionality reduction is usually performed before applying the k-NN algorithm to the data.
The algorithm may also not perform well with categorical data, since it is difficult to define a distance between categorical features.
Usage in python:
from sklearn.neighbors import KNeighborsClassifier
#classify each sample by the majority vote of its 5 nearest neighbors
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(X_train, y_train)
classifier.predict(X_test)
Decision trees are among the most popular classification algorithms. They are a type of supervised algorithm in which the data is repeatedly split based on certain parameters. A tree consists of decision nodes and leaves [18].
A decision tree grows from a root node, which has no incoming edges; it is the point from which the tree originates. Every other node has exactly one incoming edge. Nodes with outgoing edges are called decision (or internal) nodes, and nodes without outgoing edges are called leaves. An illustration of what a decision tree looks like is shown in Figure 8.
Typical decision tree.
"Is sex male?" is the root node from which this tree originates. Depending on the condition, the tree bifurcates into subsequent child nodes, and further conditions such as "Is age > 9.5?" increase the depth of the tree. As the number of leaf nodes grows, the depth of the tree increases. A leaf can also hold a probability vector.
Decision tree algorithms construct a decision tree automatically for any given dataset.
The goal is to construct an optimal decision tree by minimizing the generalization error. Any tree algorithm can be tuned through parameters such as the depth of the tree, the number of nodes, and the maximum number of features. However, construction can become complex for large problems, as both the number of nodes and the depth of the tree increase.
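As a minimal sketch of such tuning in scikit-learn (the parameter values below are illustrative, not recommendations):
from sklearn import tree
classifier = tree.DecisionTreeClassifier(
    max_depth=3,         # cap on the depth of the tree
    max_leaf_nodes=8,    # cap on the number of leaf nodes
    max_features=None,   # features considered when splitting
)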
The advantages of decision trees are that they are simple to understand and easy to interpret, and they require little data preparation. Unlike many other algorithms, a tree can handle both numerical and categorical data, and it is easy to validate a decision tree model using statistical tests. The disadvantages are that trees can grow complex in ways that do not generalize the data well, and they are unstable: small variations in the data may change the structure of the tree completely.
Usage in python:
from sklearn import tree
#fit a decision tree classifier and predict on the test set
classifier = tree.DecisionTreeClassifier()
classifier.fit(X_train, y_train)
classifier.predict(X_test)
Random forests are often referred to as ensemble algorithms, since they combine two or more models; they are an improved version of bagged decision trees. They are used for classification, regression, and other tasks.
Random forest creates n decision trees from subsets of the data. Having created the trees, it aggregates the votes from the different trees and then decides the final class of the sample object. Random forests are used in recommendation engines, image classification, and feature selection [19].
The process consists of four steps:
It selects random samples from the dataset.
For every sample it constructs a decision tree and obtains a prediction from it.
It collects a vote for every predicted result.
It selects the prediction with the highest number of votes.
Random forest's default parameters often produce good results in most cases, and one can adjust them to achieve the desired behavior. The parameters that can be used to tune the algorithm for better, more efficient results are:
Predictive power can be increased through "n_estimators", which sets the number of trees that will be built; "max_features", the number of features considered when splitting a node; and "min_samples_leaf", the minimum number of samples required at a leaf node.
To increase the model's speed, the "n_jobs" parameter can be adjusted; it sets the number of processors the model may use. Specifying "-1" signifies that there is no limit, i.e., use as many as needed.
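A minimal sketch of how these knobs might be set in scikit-learn; every value below is illustrative.
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(
    n_estimators=200,       # number of trees to build
    max_features="sqrt",    # features considered at each split
    min_samples_leaf=2,     # minimum samples required at a leaf node
    n_jobs=-1,              # use all available processors
    random_state=0,         # fix the randomness for reproducibility
)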
Thanks to the large number of decision trees, random forest is highly accurate, and because it averages all the computed predictions, it does not suffer from over-fitting. It also handles missing values in the dataset. However, the algorithm takes time to compute, since it must build the trees, average their predictions, and so on.
One real-world example where the random forest algorithm can be used is predicting a person's systolic blood pressure based on the person's height, age, weight, gender, etc.
Random forests require very little tuning compared to other algorithms. The main disadvantage of the random forest algorithm is that a large number of trees can make the process computationally expensive and too slow for real-time prediction.
Usage in python:
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(max_depth=2, random_state=0)  # shallow trees, fixed seed for reproducibility
clf.fit(X, y)         # X, y: training features and labels
clf.predict(X_test)   # predict classes for unseen samples
Support vector machines, also known as SVMs or support vector networks, fall under supervised learning. They are used for classification as well as regression. Support vectors are the data points which lie closest to the hyperplane. When the data is fed to the algorithm, it builds a classifier which can be used to assign new examples to one class or the other [20]. An SVM represents the samples as points in space separated by a gap which is as wide as possible; when a new sample is encountered, it is mapped to the corresponding category.
When the data is unlabeled, a supervised SVM cannot be applied, and an unsupervised method of classification is required.
An SVM constructs a hyperplane which can be used for classification, regression and many other purposes. A good separation is achieved when the hyperplane has the largest distance to the nearest training point of any class.
In (Figure 9), line H1 does not separate the classes; H2 separates them, but with a very small margin; H3 separates them such that the distance between the margin and the nearest point is maximal compared to H1 and H2.
Hyperplane construction and separation by lines H1, H2 and H3.
SVMs can be used in a variety of applications: they are used to categorize text, classify images and recognize handwritten characters, and they are also applied in the field of biology.
SVMs can be used with the following kernels (a selection sketch follows the list):
Polynomial kernel SVM
Linear kernel SVM
Gaussian kernel SVM
Gaussian radial basis function SVM (RBF)
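As a short sketch, each kernel in the list above is selected through the kernel argument of scikit-learn’s SVC. The kernel names and parameters are the library’s own; the training data is assumed:
from sklearn import svm
linear_clf = svm.SVC(kernel="linear")             # linear kernel SVM
poly_clf = svm.SVC(kernel="poly", degree=3)       # polynomial kernel SVM
rbf_clf = svm.SVC(kernel="rbf", gamma="scale")    # Gaussian RBF kernel SVM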
The advantages of SVM are:
Effective in high dimensional data
It is memory efficient
It is versatile
At times it may be difficult for an SVM to classify, because no optimal decision boundary exists in the original space. For example, consider points randomly distributed on a number line such that one class lies between the other: it is almost impossible to separate them with a single threshold. In such cases we transform the dataset into 2D or 3D by applying a polynomial function or another appropriate function. By doing so, it becomes easier to draw a separating hyperplane.
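A self-contained toy sketch of this idea (the data here is synthetic, invented for illustration): points on a number line where one class lies between the other cannot be split in 1D, but mapping each point x to (x, x²) makes a straight separating line possible:
import numpy as np
from sklearn import svm

X1d = np.array([-3., -2., -1., 0., 1., 2., 3.])
y = np.array([1, 1, 0, 0, 0, 1, 1])        # inner vs. outer points: no 1D threshold separates them

X2d = np.column_stack([X1d, X1d ** 2])     # 2D transform via the polynomial x -> (x, x^2)
clf = svm.SVC(kernel="linear")
clf.fit(X2d, y)                            # a linear hyperplane now separates the classes
print(clf.predict(np.column_stack([[2.5, 0.5], [2.5 ** 2, 0.5 ** 2]])))  # expected: [1 0]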
When the number of features is much greater than the number of samples, the SVM does not perform well with the default parameters.
Usage of SVM in python:
from sklearn import svm
clf = svm.SVC()       # support vector classifier (RBF kernel by default)
clf.fit(X, y)         # X, y: training features and labels
clf.predict(X_test)   # classify unseen samples
It is evident from the above that regression and classification techniques are strongly influenced by statistics. The methods were derived from statistical methods that have existed for a long time. Statistical modelling likewise consists of building a model with parameters and then fitting it. However, not all methods used in machine learning derive from statistics, nor are all statistical methods used in machine learning. Extensive research in the field of statistical methods may yield new methods usable in machine learning beyond those employed today. To some extent, machine learning can be regarded as a form of ‘applied statistics.’
Vaginal delivery is defined as a natural birth process which does not usually require significant medical intervention [1]. It is the birth of offspring in mammals, or of babies in humans, through the vagina, also known as the “birth canal”. Normal vaginal delivery can be improved through proper management of normal labour, guided by current knowledge [1]. Most women deliver vaginally, although the percentage of operative deliveries increased from 21 percent in 1996 to 30 percent in 2005 [1]. In 2013, of the nearly four million births in the United States, approximately three million were vaginal deliveries [2]. In Australia in 2009, 70 percent of women delivered vaginally, of whom 58.1% had a spontaneous vaginal delivery [3]. In anticipating complications and preparing for vaginal delivery, accurate pregnancy dating is essential [1]. There are relatively few absolute contraindications to vaginal delivery, which is why most women can deliver vaginally.
Most health experts, including the World Health Organization (WHO), recommend vaginal delivery for women whose babies have reached full term. In comparison to other methods of childbirth, vaginal delivery is the simplest delivery process [4].
There are different types of vaginal deliveries [5].
As seen earlier (section 2b), assisted vaginal delivery (AVD), also called instrumental vaginal delivery or operative vaginal delivery, occurs when a pregnant woman goes into labor, with or without the use of drugs or other techniques to induce labor, and delivers vaginally with the use of special instruments such as forceps or a vacuum extractor.
Hence, operative vaginal delivery involves applying forceps or a vacuum extractor to the fetal head to assist during the second stage of labour and facilitate delivery.
In the United States of America (USA), assisted vaginal delivery is performed in about 3% of all vaginal deliveries, according to a 2016 report of the American College of Obstetricians and Gynecologists [6].
There are two main types of operative vaginal deliveries:
Vacuum-assisted deliveries, or vacuum extractions [6], involve attaching a soft suction cup with a handle to the baby’s head while the baby is in the birth canal; a hand-held pump is then used to create the suction that helps facilitate delivery [5]. The doctor or midwife applies gentle, well-controlled traction with each uterine contraction to help guide the baby out of the vagina as the mother keeps pushing [6].
The advantage of vacuum-assisted delivery is that, in cases of prolonged fetal distress, this birth option carries a lower risk than a cesarean section.
However, the method carries the risk of minor scalp injuries or trauma and, sometimes, bleeding of the scalp.
Forceps-assisted deliveries use curved instruments to facilitate the baby’s progress in the birth canal. The forceps, which look like two large spoons, are inserted into the vagina and placed around the baby’s head; they are then used to apply gentle traction to help guide the baby’s head out of the vagina while the mother keeps pushing with each uterine contraction [6]. Forceps cannot be used if the baby is breech [5]. Forceps delivery can be an option if the mother is too tired or exhausted to push, or if the baby must be delivered more quickly than the naturally occurring process allows.
The choice of device used in operative vaginal delivery depends predominantly on the doctor’s or midwife’s preference and experience, which vary greatly. Operative vaginal deliveries are performed when the station of the fetal head is low, usually two centimeters below the maternal ischial spines (station +2) or lower, so that minimal traction or rotation is required to deliver the head [8].
Therefore, before starting an operative vaginal delivery, the doctor or midwife must do the following [9]:
Confirm that cervical dilation is fully complete
The doctor or midwife must confirm that there is engagement of fetal vertex at station +2 or lower
Confirm that the membranes have ruptured
The doctor or midwife must confirm that the fetal position is compatible with operative vaginal delivery
Confirm that the maternal bladder is empty or else it must be drained
Confirm that the maternal pelvis is adequate, by clinically assessing the pelvic dimensions (clinical pelvimetry)
The doctor or midwife needs to obtain informed consent, have adequate support and personnel, and ensure adequate analgesia or anesthesia. Neonatal care providers or nurses must be alerted so that they are ready to manage any neonatal complications that may arise. Anything less than the above requirements makes operative vaginal delivery very risky [9].
Basically, the indications for forceps delivery and vacuum extraction are the same [8]. They are [10]:
Prolonged second stage of labor, that is, from full cervical dilation to delivery of the fetus
Suspicion of fetal compromise, such as abnormal heart rate pattern
When there is a need to shorten the second stage of labor for maternal benefit. This may apply in the following circumstances: maternal cardiac dysfunction (such as left-to-right shunting), maternal exhaustion, and neurologic disorders (such as spinal cord trauma). These conditions contraindicate pushing or prevent effective pushing.
Operative vaginal delivery is not permitted in some circumstances [8]. Contraindications include an unengaged fetal head, unknown fetal position, and certain fetal disorders such as hemophilia. In particular, vacuum extraction is contraindicated in preterm pregnancies of less than 34 weeks of gestation, because the risk of intraventricular hemorrhage is high [10].
Operative vaginal delivery poses complications to both the mother and the baby [8]. The major complications are maternal injuries, fetal injuries and hemorrhage. These are more common, particularly if the doctor or midwife is inexperienced or if the patient is not appropriately selected. Significant maternal perineal trauma and neonatal bruising are more common with forceps delivery, whereas shoulder dystocia, cephalohematoma, jaundice and retinal bleeding are more common with vacuum-assisted delivery.
Sometimes, vaginal delivery may pose health risks for the mother, the baby, or both [4]. In such circumstances, medical experts recommend that pregnant women with the following conditions avoid spontaneous vaginal delivery:
Mothers with complete placenta praevia. This is the situation where the baby’s placenta fully covers the mother’s cervix
Mothers with herpes virus having active lesions
Sometimes but not always, mothers with untreated HIV infection
Mothers who had more than one or two previous cesarean deliveries or uterine surgeries
In the circumstances above, the affected mothers are advised to deliver by cesarean section, which is the preferred alternative for mothers who have one or more of those conditions.
Vaginal delivery has the following benefits to the mother and baby [5]:
Babies born vaginally tend to have fewer respiratory problems at birth and even afterwards.
The mother recovers much quicker than in other types of delivery, such as cesarean section.
Vaginal delivery has a lower rate of infection [11]. The baby will receive beneficial bacteria that help protect against infection [12]
A shorter hospital stay is realized with vaginal delivery than with other delivery types, such as cesarean section, and the recovery time is much faster [11]
The mother will be more likely to engage in early breastfeeding: a 2012 review of 53 international studies found that rates of early breastfeeding are lower after cesarean section than after vaginal delivery.
The mother will be less likely to have complications in future pregnancies
It is economically less costly than cesarean delivery: the non-profit organization FAIR Health estimated that the average cesarean section in the United States of America costs about $16,907, whereas the average vaginal delivery costs nearly 30 percent less [11]
Vaginal deliveries have the following disadvantages [5]:
They carry a higher risk of perineal tearing.
Sometimes, a vaginal delivery may not be recommended in some medical conditions (see section 4 above).
In order to manage normal vaginal delivery, many obstetric facilities use a combined labor, delivery, recovery and postpartum room, allowing the mother, the caregiver and the neonate to remain in the same room throughout their stay; this also makes care less costly. Some health facilities use a labor room and a separate delivery suite, to which the mother is transferred when it is time for delivery. Usually, the mother’s partner or another support person is allowed in to accompany the mother [13]. In the delivery room, the perineum is washed and draped, and the neonate is delivered. After delivery, the woman may remain there or be transferred to a postpartum unit.
In managing normal vaginal delivery, the doctor or midwife must be well acquainted with the steps in conducting vaginal delivery [14]. See diagrammatic illustrations 1 and 2 below (Figure 1).
Stages of vaginal delivery.
There are many options for anesthesia: local, regional and general anesthesia.
Usually, local anesthetics and opioids are used. These medicines pass through the placenta, so before delivery they should be given in small doses to avoid toxicity in the neonate. The common toxicities related to local anesthesia are central nervous system (CNS) depression and bradycardia [13].
It must be noted that opioids used alone do not provide adequate analgesia and so are most often used with anesthetics.
Among the options for anesthesia, the most common local anesthetic methods include pudendal blocks and perineal infiltration, as well as para-cervical blocks.
Several methods are available for regional anesthesia.
General anesthesia is not recommended for routine delivery because potent and volatile inhalation drugs, such as isoflurane, can cause marked depression in the fetus [16].
Again, on a rare basis during vaginal delivery, 40% nitrous oxide with oxygen can be used for analgesia, as long as verbal contact with the mother is well maintained [13].
However, for induction of general anesthesia during cesarean section, thiopental, a sedative-hypnotic, is predominantly given intravenously together with other drugs such as succinylcholine and nitrous oxide plus oxygen. Used alone, thiopental provides inadequate analgesia, yet with it induction is rapid and recovery prompt. Thiopental may become concentrated in the fetal liver, which keeps its levels in the central nervous system (CNS) from becoming high; high CNS levels of thiopental may cause neonatal depression, which is very detrimental to the desired outcome of delivery.
In other studies, analgesia is recommended [17]. In this case, an attempt with a peridural approach or with short-acting narcotics is advisable. Platelet counts of more than 50,000/µL for cesarean delivery, more than 20,000/µL for vaginal delivery, more than 75,000/µL for epidural anesthesia, and more than 50,000/µL for spinal anesthesia, respectively, are considered safe for delivery.
In order to fully comprehend delivery of the fetus, one needs to know the mechanism of labour well. It involves the passive movements the fetus must undertake in order to negotiate the maternal bony pelvis. Labour can thus be broken down into the following stages: descent, engagement, neck flexion, internal rotation, crowning, extension of the presenting part, restitution, internal rotation and lateral flexion. Readers are advised to consult the other chapters of this book, where labour is discussed in detail. Knowledge of pelvic anatomy and pelvimetry, which is beyond the scope of this section, is also vital.
Thus, in delivering the fetus, a vaginal examination is done to ascertain the position and station of the fetal head. The head is usually the presenting part of the fetus, but rarely it is the buttocks [13]. If effacement is complete and the cervix is fully dilated, the mother is asked to bear down and strain with each uterine contraction. This helps move the head through the pelvis and progressively dilates the vaginal introitus so that more and more of the head emerges. Once about 3 or 4 cm of the head is visible during a uterine contraction in nulliparas, the following maneuvers can facilitate vaginal delivery and reduce the risk of perineal tear:
The doctor or midwife, if he or she is right-handed, will place the left palm over the fetus’s head during a contraction to control progress.
Concurrently, the doctor or midwife should place the curved fingers of his or her right hand against the dilating perineum, through which the infant’s brow or chin is felt. See Figure 2 below.
In order to advance the head, the doctor or midwife should wrap his or her hand in a towel and then with the curved fingers, he or she should apply pressure against the underside of the brow or chin. This maneuver is known as ‘modified Ritgen’s maneuver’ [13].
Steps in conducting vaginal delivery.
Possible sites of episiotomy in vaginal delivery.
It is common knowledge that active management of the third stage of labour reduces the risk of postpartum hemorrhage. Active management includes giving the mother a uterotonic drug, such as oxytocin, as soon as the fetus is delivered. The uterotonic drug helps the uterus contract effectively and decreases bleeding due to uterine atony.
Oxytocin may be given as 10 units intramuscularly or as an infusion of 20 units in 1000 mL of normal saline at 125 mL/hour. It should not be given as an intravenous bolus, in order to minimize the risk of cardiac arrhythmia.
Upon successful delivery of the baby and administration of oxytocin, the doctor or midwife should apply gentle, controlled traction on the cord and place a hand gently on the mother’s abdomen over the uterine fundus to detect contractions [13]. Note that separation of the placenta from the uterus usually occurs during the first or second contraction, often with a gush of blood from behind the separating placenta. The mother can help deliver the placenta by bearing down. If she cannot bear down and substantial bleeding occurs, the placenta can be evacuated by placing a hand on the abdomen and exerting firm downward (caudal) pressure on the uterus. This procedure must be done only if the uterus feels firm, because pressure on a flaccid uterus can cause it to invert, worsening the problem.
For successful vaginal delivery, the doctor or midwife should consider doing the following:
Be prepared and knowledgeable about the conduct of vaginal delivery and the equipment to use
Seek consent for vaginal delivery from the mother or next-of-kin
Allay mother’s anxiety
Wait for labour to progress spontaneously, while monitoring contraction, cervical dilation, fetal descent and heart rate
Have your equipment, also called the delivery set, placed in the proper position in the labour suite near the mother
Have your oxytocics ready and placed in proper position
Put on your sterile gloves
Attend to the mother attentively
Conduct vaginal delivery as labour progresses and the baby comes out
Call for help where appropriate
Have neonatal resuscitation equipment ready and functional
In vaginal delivery, take care that you do not do the following:
Do not fail to seek consent for delivery from the mother or next-of-kin
Do not fear to call for help whenever it is required
Do not use unsterile equipment
Do not conduct vaginal delivery without ascertaining the availability of an assistant or a senior midwife or doctor
Do not conduct vaginal delivery without preparing your delivery set
Do not conduct vaginal delivery without ascertaining the availability of oxytocics
Do not conduct vaginal delivery without ascertaining the availability of neonatal resuscitation equipment
Do not conduct vaginal delivery without ascertaining the functionality of your neonatal resuscitation equipment
Do not be absent-minded
For a full-term newborn, vaginal delivery means delivering the baby at a gestational age of 37–42 weeks, counted from the first day of the mother’s last menstrual period. Gestational age is determined by accurate history taking from the mother or by ultrasonographic dating and evaluation. Some mothers, for one reason or another, may not adequately know the first day of their last normal menstrual period, in which case reliance on history from the mother may be misleading. Naegele’s rule is a commonly used formula for predicting the expected date of delivery, based on the date of the first day of the last normal menstrual period. The rule assumes a menstrual cycle of 28 days with mid-cycle ovulation on day 14; the formula may therefore not apply to women whose cycles are shorter or longer than 28 days, or whose ovulation occurs before or after day 14. In such cases, ultrasonographic dating can be much more accurate, especially if done in early pregnancy, before 12 weeks of gestation; dating done earlier than 12 weeks is more accurate than dating done at or after 12 weeks, although this also depends on the experience and knowledge of the ultrasound operator. Approximately 11% of singleton pregnancies are delivered vaginally pre-term, whereas 10% of all deliveries are post-term. Good knowledge of normal vaginal delivery thus forms the basis for the management of complicated deliveries.
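A minimal sketch of Naegele’s rule as described above, for illustration only; the 280-day (40-week) offset follows directly from the 28-day-cycle, day-14-ovulation assumption stated in the text, and the example date is invented:
from datetime import date, timedelta

def naegele_edd(lmp: date) -> date:
    # Expected date of delivery = first day of the last normal
    # menstrual period + 280 days (40 weeks)
    return lmp + timedelta(days=280)

print(naegele_edd(date(2021, 1, 1)))  # prints 2021-10-08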
I do acknowledge the technical guidance of my colleagues in the Faculty of Health Sciences of Uganda Martyrs University. In a special way I appreciate Ms. Scovia Mbabazi, the former Associate Dean of the faculty.
The author declares no conflict of interest.