Dataset schema (field · dtype · range or classes):

article_id                int64    6 – 10.2M
title                     string   lengths 6 – 181
content                   string   lengths 1.17k – 62.1k
excerpt                   string   lengths 7 – 938
categories                string   18 classes
tags                      string   lengths 2 – 806
author_name               string   605 classes
publish_date              date     2012-05-21 07:44:37 – 2025-07-11 00:01:12
publication_year          date     2012-01-01 – 2025-01-01
word_count                int64    200 – 9.08k
keywords                  string   lengths 38 – 944
extracted_tech_keywords   string   lengths 32 – 191
url                       string   lengths 43 – 244
complexity_score          int64    1 – 4
technical_depth           int64    2 – 10
industry_relevance_score  int64    0 – 7
has_code_examples         bool     2 classes
has_tutorial_content      bool     2 classes
is_research_content       bool     2 classes
10,052,503
Google Wants To Improve AI’s Multi-Tasking
Which tasks should be trained together in multi-task neural networks? Google AI has a new method called Task Affinity Groupings (TAG) to answer this. In multi-task learning, information learnt by one task can also benefit the training of other tasks. Google's method measures inter-task affinity by training all tasks together in a single multi-task network and then finding the degree to which one task's gradient update on the model's parameters affects the loss of the other tasks in the network. This quantity is averaged across training, and the tasks are then grouped together to maximise the affinity for each task. "Multi-task training can improve training efficiency and model performance but requires careful training task selection. Today we present a new approach that improves task grouping selection by measuring how training on one task affects others in the group." (Google AI, @GoogleAI, October 25, 2021)

Why is multi-task learning important?

According to the research, multi-task learning helps improve modelling performance by:

- Introducing an inductive bias to prefer hypothesis classes that explain multiple objectives
- Focusing the model on relevant features

When tasks compete for model capacity or are unable to build a shared representation that generalises to all objectives, performance may degrade. It therefore becomes important to find groups of tasks that benefit from co-training. But, as per the research, a human's understanding of task similarity is driven by experience and intuition. Moreover, the benefit or harm of co-training depends on other non-trivial decisions such as dataset characteristics, model architecture, hyperparameters, capacity and convergence. This makes it crucial to find a technique for determining which tasks should be trained together in a multi-task neural network.

MAML as inspiration

The researchers were inspired by meta-learning for this method.
One such meta-learning algorithm, Model-Agnostic Meta-Learning (MAML), first applies a gradient update to the model's parameters for a collection of tasks, and then updates its original set of parameters to minimise the loss for a subset of tasks in that collection, computed at the updated parameter values. MAML effectively trains the model to learn representations that minimise the loss not for its current set of weights but for the weights after one or more training steps.

What is TAG exactly?

TAG follows a similar approach to MAML. Here is what it does:

- Updates the model's parameters with only a single task in focus
- Observes how this change would affect the other tasks in the multi-task neural network
- Undoes the update
- Repeats the process for every other task, collecting information on how each task in the network would interact with every other task
- Updates the model's shared parameters with respect to every task in the network

This reveals that certain tasks consistently exhibit beneficial relationships while others are antagonistic towards each other. Then, as per the research, "A network selection algorithm can leverage this data in order to group tasks together that maximise inter-task affinity, subject to a practitioner's choice of how many multi-task networks can be used during inference."

Image: Google (Overview of TAG. First, tasks are trained together in the same network while computing inter-task affinities. Second, the network selection algorithm finds task groupings that maximise inter-task affinity. Third, the resulting multi-task networks are trained and deployed.)

What did Google find out?

The researchers found that TAG can select very strong task groupings. On the CelebA and Taskonomy datasets, TAG was competitive with the prior state-of-the-art while operating 32x and 11.5x faster, respectively. The speedup on the Taskonomy dataset translated to 2,008 fewer Tesla V100 GPU hours spent finding task groupings.
The empirical findings indicate that the approach is highly competitive: it outperforms multi-task training augmentations like Uncertainty Weights, GradNorm and PCGrad, and performs competitively with grouping methods like HOA while improving computational efficiency by over an order of magnitude. The research also showed that inter-task affinity scores can find close-to-optimal auxiliary tasks and implicitly measure generalisation capability among tasks.

Challenges

Though identifying task groupings in multi-task learning can have a big impact in saving time and computational resources, there are risks too. Inter-task affinities can be mistakenly interpreted as "task similarity", creating an implied association and/or causation among tasks with high mutual inter-task affinity scores. This can be a problem for datasets that contain sensitive prediction quantities related to race, gender, religion, age, status, physical traits, etc. Inter-task affinities could mistakenly be used to support an unfounded conclusion that posits similarity among tasks. But acknowledging these risks is a good move: it reduces the chances of misuse.
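The per-task lookahead at the heart of TAG can be illustrated with a toy sketch. This is only an illustration of the idea, not Google's implementation: the quadratic per-task losses, the task names and the learning rate are all invented for the example. Each "task" pulls the shared parameters toward its own target, and the affinity score measures how much a lookahead step on one task reduces (positive) or increases (negative) another task's loss.

```python
# Toy sketch of TAG's inter-task affinity measure (illustrative only).
# Each "task" is a quadratic loss pulling shared parameters toward its target.

def loss(theta, target):
    """Per-task loss: squared distance from the task's preferred parameters."""
    return sum((p - t) ** 2 for p, t in zip(theta, target))

def grad(theta, target):
    """Gradient of the quadratic loss w.r.t. the shared parameters."""
    return [2 * (p - t) for p, t in zip(theta, target)]

def inter_task_affinity(theta, targets, lr=0.1):
    """For each task i: take a lookahead step on task i alone, measure how
    every other task j's loss changes, then undo the step.
    Z[(i, j)] > 0 means task i's update helped task j."""
    affinity = {}
    for i, target_i in targets.items():
        step = grad(theta, target_i)
        lookahead = [p - lr * g for p, g in zip(theta, step)]  # one-task update
        for j, target_j in targets.items():
            if i == j:
                continue
            before, after = loss(theta, target_j), loss(lookahead, target_j)
            affinity[(i, j)] = 1 - after / before  # relative loss reduction
        # the update is "undone" simply by reusing the original theta next loop
    return affinity

theta = [0.0, 0.0]
targets = {"depth": [1.0, 1.0], "normals": [0.9, 1.1], "segmentation": [-1.0, -1.0]}
Z = inter_task_affinity(theta, targets)
# Tasks pulling the parameters in similar directions ("depth" and "normals")
# show positive affinity; antagonistic tasks show negative affinity.
```

In the full method these scores are averaged across training steps, and the network selection algorithm then searches for groupings that maximise total affinity.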
Task Affinity Groupings method shows which tasks should be trained together in multi-task neural networks
["Global Tech"]
["Neural Networks"]
Sreejani Bhattacharyya
2021-10-28T14:00:00
2021
778
["Go", "meta-learning", "AI", "neural network", "RPA", "ML", "RAG", "Aim", "V100", "R", "Neural Networks"]
["AI", "ML", "neural network", "Aim", "RAG", "R", "Go", "meta-learning", "RPA", "V100"]
https://analyticsindiamag.com/global-tech/google-wants-to-improve-ais-multi-tasking/
3
10
0
false
false
false
22,205
Google’s New Online Course Will Teach You AI And Machine Learning Concepts For Free
Machine learning and artificial intelligence are the most trending topics in the tech world today, with both skeptics and advocates dominating the headlines. Not a day passes without advancement and progress in the artificial intelligence sector, which will soon become mainstream. Now, Google wants to open up this technology widely and make it more accessible to anyone interested in machine learning, with its free online course. Google on Wednesday launched a new website called Learn with Google AI. This educational website is meant to be an information hub for anyone who wants to 'learn about core machine learning concepts, develop and hone your machine learning skills, and apply ML to real-world problems'. The new website aims to cater to everyone, from students, to curious cats, to advanced researchers. "AI can solve complex problems and has the potential to transform entire industries, which means it's crucial that AI reflect a diverse range of human perspectives and needs. That's why part of Google AI's mission is to help anyone interested in machine learning succeed—from researchers, to developers and companies, to students," Google's Zuri Kemp said. The California-based tech giant has repeatedly stated its goal to democratise artificial intelligence and make its tools available to everyone. Google's Learn with Google AI website also features a free course called the Machine Learning Crash Course (MLCC) with TensorFlow APIs. Google originally designed this course for its employees as part of a two-day boot camp aimed at giving a practical introduction to machine learning fundamentals. More than 18,000 employees have already enrolled in MLCC, to enhance camera calibration for Daydream devices, build VR for Google Earth, and improve streaming quality at YouTube.
Now, Google is making MLCC available to everyone. The 15-hour online course includes real-world case studies, interactive visualisations, video lectures and 40+ exercises to help teach machine learning concepts. "MLCC's success at Google inspired us to make it available to everyone," Kemp said.
Machine learning and artificial intelligence are the most trending topic in the tech world today, with both skeptics and advocates dominating the headlines. Not a day passes without advancement and progress in the artificial intelligence sector, which will soon become mainstream. Now, Google wants to widely open up this technology and make it more accessible […]
["AI News"]
["AI (Artificial Intelligence)", "Google", "Machine Learning", "Tensorflow"]
Smita Sinha
2018-03-01T08:39:00
2018
319
["Go", "API", "artificial intelligence", "machine learning", "AI", "ML", "Machine Learning", "Aim", "ai_frameworks:TensorFlow", "Google", "TensorFlow", "R", "AI (Artificial Intelligence)", "Tensorflow"]
["AI", "artificial intelligence", "machine learning", "ML", "Aim", "TensorFlow", "R", "Go", "API", "ai_frameworks:TensorFlow"]
https://analyticsindiamag.com/ai-news-updates/googles-new-online-course-will-teach-ai-machine-learning-concepts-free/
2
10
0
false
false
false
41,177
5 Core SQL Concepts You Should Master
Structured Query Language (SQL) plays an important part in an organisation's data management systems. When applying for a data analyst job, most organisations ask for hands-on experience with SQL. SQL is a simple yet powerful language that is widely used as a business intelligence tool. In this article, we list down 5 important concepts one must know to master SQL for data science.

1| Basics Of Relational Databases And SQL

A database is a set of structured data which can be easily accessed. A relational database is a collection of data with pre-defined relationships, organised in the form of tables with rows and columns. Some of the key terms used throughout relational databases are tables, records, primary keys, attributes and foreign keys. A table is sometimes called a relation and contains one or more categories of data; attributes are also known as columns, and a record is also known as a tuple or a row. Each table contains a primary key, which is unique and is used to identify the information in the table. Foreign keys are used to link to the primary keys of other tables. Structured Query Language (SQL) is a powerful database tool used to perform operations such as creating, maintaining and retrieving data stored in a relational database. It is essentially the standard language for data manipulation in a Database Management System (DBMS).

2| Understanding The SQL Commands

Data Definition Language (DDL): DDL commands such as create, drop, alter and truncate are used for creating, dropping, altering and modifying the structure of database objects. Data Manipulation Language (DML): DML commands such as insert, update and delete are used for inserting, updating and deleting the data held in database objects. Data Control Language (DCL): DCL commands such as grant and revoke are used for providing security on database objects.
Data Query Language (DQL): the DQL command select is used for retrieving data from the database. Transaction Control Language (TCL): TCL commands such as commit, rollback and savepoint are used for managing transactions in the database.

3| Knowledge Of Joins

SQL joins are used for combining records from two or more tables in a database. The different types of joins are:

- INNER Join: selects all the records with matching values in both tables.
- FULL Join: selects all the records with a match in either the left or the right table.
- LEFT Join: selects all the records from the left table along with the matching records from the right table.
- RIGHT Join: selects all the records from the right table along with the matching records from the left table.

4| Interface SQL With Python Or R

If a programmer knows a statistical language such as Python or R, s/he can easily run packages from both languages to build machine learning models on a large dataset in a SQL server. Knowledge of these statistical languages, along with an understanding of SQL, will surely help a programmer move up the career ladder. With Python or R in SQL Server, one can perform data analysis, prepare datasets, create interactive visualisations of data, etc.

5| Advanced SQL

Once you gain insight into the basics of SQL and understand them clearly, it is time to learn the deeper concepts of advanced SQL. In this part, you will learn about various other keywords and concepts such as UNION, UNION ALL, INTERSECT, MINUS, LIMIT, TOP, CASE, DECODE, AUTO-INCREMENT, IDENTITY, etc., in order to create advanced reports and perform complex pattern matching.
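The join types above can be tried out directly; here is a minimal sketch using Python's built-in sqlite3 module. The departments/employees tables and all names are invented for illustration (note that FULL and RIGHT joins are only available in recent SQLite versions, so only INNER and LEFT are shown):

```python
import sqlite3

# Two toy tables: one employee has no department, so INNER and LEFT
# joins return different results. dept_id is the foreign key.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT,
        dept_id INTEGER REFERENCES departments(dept_id)
    );
    INSERT INTO departments VALUES (1, 'Analytics'), (2, 'Engineering');
    INSERT INTO employees VALUES (10, 'Asha', 1), (11, 'Ravi', NULL);
""")

# INNER JOIN: only rows with matching dept_id values in both tables.
inner = con.execute("""
    SELECT e.name, d.name FROM employees e
    INNER JOIN departments d ON e.dept_id = d.dept_id
    ORDER BY e.emp_id
""").fetchall()

# LEFT JOIN: every employee, with NULL where no department matches.
left = con.execute("""
    SELECT e.name, d.name FROM employees e
    LEFT JOIN departments d ON e.dept_id = d.dept_id
    ORDER BY e.emp_id
""").fetchall()

print(inner)  # [('Asha', 'Analytics')]
print(left)   # [('Asha', 'Analytics'), ('Ravi', None)]
```

The same pattern (swap the keyword in the JOIN clause) covers RIGHT and FULL joins on databases that support them.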
Structured Query Language (SQL) plays an important part in the data management system in an organisation. While applying for a data analyst job, most organisations ask for hands-on experience with SQL. SQL is a simple yet powerful language that is used widely as a business intelligence tool. In this article, we list down 5 important […]
["AI Trends"]
["SQL"]
Ambika Choudhury
2019-06-24T06:09:04
2019
593
["business intelligence", "data science", "Go", "machine learning", "AI", "ML", "Python", "SQL", "GAN", "R"]
["AI", "machine learning", "ML", "data science", "Python", "R", "SQL", "Go", "GAN", "business intelligence"]
https://analyticsindiamag.com/ai-trends/5-core-sql-concepts-you-should-master/
2
10
1
false
false
false
23,654
How Neural Networks Are Being Used For Skin Cancer Classification
Cancer is one of the most dreaded diseases in the world, and despite medical advancements happening at a rapid pace, this disease continues to affect and take many lives. This article explores the application of neural networks, particularly in skin cancer classification. We will discuss methods of classifying images for detecting skin cancer with neural networks, which can help oncologists and dermatologists diagnose the disease faster and with greater accuracy.

Forms Of Skin Cancer

Skin cancer is mainly caused by the sun's rays damaging the human skin, causing abnormal growth in skin cells. Although the disease is more prevalent in Western countries, the possibility of skin cancer occurring should not be ignored by people of any skin colour. The following are the forms of skin cancer, ranging from early-stage growths to the most dangerous:

Actinic Keratosis: precancerous growths on the skin that are usually pale, dry spots.
Basal Cell Carcinoma: the skin develops pinkish blemishes. It originates in the epidermis layer of the skin.
Squamous Cell Carcinoma: red bumps or patches develop on the skin. If left untreated, it can lead to permanent skin damage.
Melanoma: develops as a small dark mole which, left undiagnosed, can cause death by spreading to other organs. It originates in the skin's pigment-producing cells, called melanocytes.

The first three forms can sometimes lead to melanoma, and should be mitigated at the earliest onset.

Neural Networks For Skin Cancer Classification

Since melanoma is the most dangerous of the various forms of skin cancer, neural networks (NN) are usually developed for, and centred around, the classification of this specific type. However, NNs are trained to classify other forms too. In one study by Andre Esteva at Stanford University, NNs were trained to classify approximately 2,000 skin cancer variations using more than 100,000 images.
The sample images chosen for classification are shown below. Image Credits: Andre Esteva, Stanford University. The images are classified along two dimensions of tumour growth, benign and malignant, against three forms of skin cancer development: epidermal lesions, melanocytic lesions and dermoscopically observed melanocytic lesions. This is to avoid misclassification by the NN and make the diagnosis of skin diseases easier. The network architecture used is Inception-V3, a deep Convolutional Neural Network (CNN). This NN has an error rate of only up to 20 percent, which makes it helpful for image validation to eliminate misclassification to a greater extent. Once the NN is introduced into the process, the training image data is segregated into 750 classes, as well as into inference classes along the two dimensions (benign/malignant probability), using a partitioning algorithm. Another dimension, called Non-Neoplastic, is also introduced to the inference classes, considering the possibility of benign growths turning malignant.

Validation And Working

Once the entire NN classification structure is established for skin cancer classification, it is set for validation. This means the set of input images is checked for whether it conforms to the skin classification scenario or not. The validation can be done according to the dimensions or classes mentioned earlier. Test sets can be images collected for classification, and their number can vary with the images collected in the context. In the study, the accuracy obtained in validation was close to that of dermatologists themselves (almost 90 percent accurate). T-distributed stochastic neighbour embedding (t-SNE) is the ML algorithm used for partitioning. The classification bases are visualised in the picture below.
Image Credits: Andre Esteva, Stanford University. Although the validation is done to maintain precision in classification, there will be instances where the NN and algorithm misclassify a benign growth as malignant, or vice versa, or even give a completely different output altogether. The misclassification rate stands low, at around 10 percent, but there is room for improvement to achieve 100 percent accuracy.

Conclusion

There are no regions or organs in the human body that cannot be afflicted by cancer, which is why it is important to keep a check on its progression. The earlier the cancer is diagnosed, the easier it is to prevent it from growing and spreading to other parts of the body. This can also lead to curing the threat altogether. This is where machine learning aids oncologists and dermatologists with accuracy and speed.
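The mapping from many fine-grained classes to a few coarse inference classes (benign / malignant / non-neoplastic) can be sketched in a few lines: sum the probability mass of the fine classes inside each coarse class, then pick the largest. This is a toy illustration; the class names, partition and probabilities below are invented, not the study's actual 750-class taxonomy.

```python
# Toy sketch: aggregate fine-grained class probabilities into coarse
# inference classes. Class names and partition are illustrative only.

PARTITION = {
    "benign": ["benign_nevus", "seborrheic_keratosis"],
    "malignant": ["melanoma", "basal_cell_carcinoma"],
    "non-neoplastic": ["dermatitis"],
}

def coarse_probabilities(fine_probs):
    """Sum the probability mass of each fine class within its coarse class."""
    return {
        coarse: sum(fine_probs[f] for f in fine)
        for coarse, fine in PARTITION.items()
    }

def classify(fine_probs):
    """Pick the coarse inference class with the highest aggregated mass."""
    coarse = coarse_probabilities(fine_probs)
    return max(coarse, key=coarse.get)

# Hypothetical network output for one lesion image:
probs = {"benign_nevus": 0.20, "seborrheic_keratosis": 0.15,
         "melanoma": 0.40, "basal_cell_carcinoma": 0.15, "dermatitis": 0.10}
print(classify(probs))  # malignant (0.40 + 0.15 = 0.55)
```

Aggregating before deciding is what lets the network be trained on fine-grained labels while still answering the coarser clinical question.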
Cancer is one of the most dreaded diseases in the world and in spite of the fact that medical advancements are happening at a rapid pace, this evil apparition of a disease continues to affect and take many lives. This article explores the application of neural networks in particularly skin cancer classification. We would discuss […]
["IT Services"]
[]
Abhishek Sharma
2018-04-13T12:29:59
2018
726
["Go", "API", "machine learning", "TPU", "AI", "neural network", "ML", "Ray", "GAN", "R"]
["AI", "machine learning", "ML", "neural network", "Ray", "TPU", "R", "Go", "API", "GAN"]
https://analyticsindiamag.com/it-services/how-neural-networks-are-being-used-for-skin-cancer-classification/
3
10
0
true
false
false
26,935
Ashim Roy Of CardioTrack Explains Why ‘AI Made In India’ Is A Need Of The Hour
America's excellence in tech development is well known. It is one of the most adept countries when it comes to technology, so much so that various other countries rely on its resources. For example, China's dependency over the past two decades on chips from US companies has resulted in a situation where ZTE, one of the leading Chinese telecom companies, is on the brink of collapse because of the US government's trade embargo. It could be said that over-reliance on American technology crippled the functionality of this company. Fearing similar consequences for India, Ashim Roy of CardioTrack believes that India should make thorough use of indigenous AI solutions, as that is the best way to protect the interests of the nation. In technologies involving AI, Indian companies and entrepreneurs are largely dependent on technologies from US tech giants such as IBM, Amazon and Google. Although these frameworks provide the fastest way to launch a product in the market, Roy believes that Indian companies and the government are leaving themselves and their customers at the whims of US trade and foreign policy. In areas such as healthcare and cybersecurity, it is vitally important to develop indigenous solutions so that no foreign country can jeopardise the lives and wellness of the citizens of this country. In his efforts towards indigenous AI, Roy aims to start a campaign to create awareness around the same. Analytics India Magazine caught up with Roy to understand his views on the need for implementing indigenous technology, his initiative, how he plans to implement it, and more. Analytics India Magazine: When it comes to AI, Indian companies and entrepreneurs are largely dependent on technologies from US tech giants. Do you think it leaves Indian companies and the government vulnerable to the vagaries of US trade or foreign policies? Ashim Roy: This is a very significant problem on multiple levels.
Artificial intelligence, machine learning and deep learning applications are being used in many day-to-day activities and have helped automate various processes with a very high level of accuracy. The US has become a dominant player in AI because of its significant investments in research and development over the past two decades. Some of the other players are China, Israel and several European countries. Despite these developments, a large number of Indian and global AI applications are still being built on third-party AI engines, and most of these AI engines are developed in the US. For instance, in India, many AI and ML applications are being developed on IBM Watson, which could be a significant problem given the recent situation at ZTE. The Chinese telecom equipment giant ZTE is the latest victim of the trade embargo imposed by the US Department of Commerce. This embargo has stopped the shipment of components from US companies such as Qualcomm to ZTE and has led to a factory shutdown, leaving 75,000 workers without jobs. If the US were to impose similar restrictions on IBM's licensing of Watson, many Indian companies offering AI applications built on top of Watson would be in peril. Moreover, if these AI applications are in critical areas of healthcare and cybersecurity, the lives of many Indians would be endangered. It is, therefore, essential not to become completely dependent on partners from countries like the US, China or Europe in areas that are critical to the safety, security and well-being of the citizens of India. AIM: How can India's dependency on AWS/Google cloud engines raise security issues? AR: Many problems can arise when the AI engine of a foreign partner is used. For instance, while training the AI engine, large volumes of data must be shared with it.
Personal data security has been on everyone's mind since Facebook's misadventure with personal data, and there are many questions to be answered, such as: Has the AI application developer taken precautions to protect user identity? Has the AI application developer taken proper authorisation to use and share the user data? Has the AI application developer taken proper measures to keep the data safe from the hands of hackers and terrorists? Is the AI application developer compliant with the General Data Protection Regulation (GDPR), which came into effect on 25 May 2018? Does the AI engine take care of all these data security issues? And others. In general, most of the AI engine and framework companies will not answer most of these questions in a way that gives regulators and policymakers confidence that personal data is being kept safe. The last two decades of playing fast and loose with personal data at all levels (governments, businesses and consumers) have led to very serious concern about the misuse of personal data, and AI engines "see" a lot of data. The Indian government has been developing its own data privacy laws. These are designed to protect consumers in case of a data breach. The regulatory framework specifies which data elements need to be protected. The AI application developer needs to ensure that the overall solution, including the application, the AI engine and every element of the solution that has access to personal data, adheres to these requirements. A tall order for an upstart AI entrepreneur. No wonder one of the technology visionaries of our times, Elon Musk, thinks of an AI doomsday. AIM: How important is it to develop indigenous AI solutions when it comes to sensitive areas like healthcare? AR: The future of India's safety and security depends largely on its ability to develop indigenous technologies for areas such as healthcare, cybersecurity and data privacy.
Being dependent on foreign technology for mobile phones is acceptable, because if the US stops Intel and Qualcomm from selling chips and stops Google from licensing Android to Micromax, consumers in India can still buy phones from Samsung or HTC. However, if IBM is banned from licensing Watson to Indian healthcare solution providers, the situation could have disastrous consequences for the well-being of the population. This could seriously impact the success of Modicare, because without AI interpretation there can be no intervention. One might find the idea of IBM, Google, Intel and Qualcomm not selling their products in India preposterous. Think again. The only reason the US did not put a similar embargo on Huawei was that it wanted to avoid an all-out trade war with China, since Huawei is a much bigger company with deep-rooted political connections. The scenario of a trade embargo is not far-fetched. AIM: Indigenous AI solutions based on global research are the need of the hour. But how well-equipped are Indian companies and research institutes to achieve this? For instance, Indian IT companies may not have the infrastructure or platforms of Microsoft, AWS or Google. How can one remedy this situation? AR: There is a huge amount of information on AI research, and scholarly publications about it, in the public domain. Two of India's IT majors, Infosys and Wipro, have developed their own AI platforms. These activities are meant to gain knowledge and support various IT services and customer requirements. The reason many of the US AI platform developers made their platforms available to the global AI application development community at low or no cost is access to data that the platform companies do not have. Here is how this works: application developers using the AI platform need to constantly feed the AI engine with data to train it. The platform owner gets access to a steady supply of valuable data without having to pay for it.
This makes it a really sweet deal. Not surprisingly, US IT majors are aggressively staking their positions in this wild-west AI terrain. In India, Wipro HOLMES and Infosys Nia are two notable entries when it comes to AI platforms. However, these are proprietary platforms. It would have been great if Infosys and Wipro had made them open-access platforms to help develop the AI community in India. This situation can be remedied, however. All that is required is a sound AI policy framework, aggressive investment, political will and collaborative implementation. Sounds simple, right? AIM: What are the steps that Indian companies, the government and policymakers can take to overcome this issue? AR: Some of these steps are: AI policy framework: a policy framework that encourages the development of a complete AI platform. This will also ensure that foreign players abide by rules to develop an AI knowledge base in India, that platform infrastructure is implemented within national boundaries and that, in case of an adverse geopolitical situation, access to the platform cannot be denied. Data protection regulations: all data sourced in India remains in India. Data is protected to ensure citizens' privacy. Unless explicitly directed by the owner of the data, it cannot be shared with anyone else. The owner of the data cannot be denied access to their data under any circumstances. Collaboration: collaboration is the key to rapid progress on this front. Unless there are funds available from the government, neither academia nor industry is likely to move forward quickly and actively engage in collaboration. The current AI activities are akin to a set of silos that do not benefit from each other. This is India: show me the money! Investment: the government should lead the way by providing grants, debt and equity investment.
The government must ensure that these funds can be easily accessed by entrepreneurs and scholars and used for the purpose for which they were allocated; to make this happen smoothly, it has to eliminate red tape and corruption. The current process of accessing government grants is archaic, inefficient and lacks transparency. Funds allocated to promote entrepreneurship in India in 2014 have still not found their way to entrepreneurs and startups. Monitoring: measuring progress against the policy framework and an expected set of outcomes is a must. If progress lags, the implementation must be changed. The age-old saying that what is not measured or monitored does not work applies here as well. Regulatory enforcement: without regulatory enforcement, the initiative will fail. When it comes to companies like Facebook, it is only through regulatory enforcement that one can hope the data and privacy of Indian citizens will be protected. AIM: What are the challenges along the way to having this in place? Do we have the talent to sustain the momentum or build cloud infrastructure? AR: One thing India does not lack is talent. However, talent alone will not solve the problem. What we lack is leadership. We also lack strategic and policy-level thinking about issues related to new technologies such as AI, and the ability to envision the impact of AI on diverse areas from agriculture, waste management and transportation to politics. However, it is not fair to put all the blame on the government. If forums can be created where strategic thinkers from the industry can exchange their viewpoints and create awareness among policymakers (it does happen in certain industries), leading to policy-level initiatives in a reasonably short span of time (under a year), only then can we expect rapid progress. While India may not be a rich country, building cloud infrastructure, using high-end CPUs and GPUs and creating a flourishing AI development environment are well within our capability.
It is essential that government funding be available to support private players and entrepreneurs in offering such solutions to the AI research and development community within India. Such easily accessible infrastructure will promote R&D in AI. AIM: Please tell us about the campaign you have started to create awareness about the policy issues Indian companies might face when it comes to technologies like AI. AR: I would not say that I have started a "campaign" to create awareness about the need for indigenous AI development. However, I feel very strongly about the need for action from policymakers, industry and academia. Whenever I get an opportunity to speak about AI, I do bring up these issues about the need for policy-level discussions and the other emerging technologies that are shaping the world around us. Perhaps the publication of this interview will help start the "campaign". And I am absolutely committed to helping in this process. I do hope that policy experts, business leaders and academia come together and join hands to develop these ideas and implement solutions that will protect the interests of the people of India and secure the country from failing trade talks and the whims of developed countries. AIM: This campaign would also mean serious rework and re-evaluation of government policies. How far do you think this is possible in the current context? AR: We need to actively engage with various government agencies and convince political leaders about the urgency of the need. And the list can go on and on. However, this will require funding to make the representation, develop the ideas further, collaborate with industry and academia, learn from global policy and technology initiatives and create an environment where everyone can see the outcome of these activities. If the initiative is for protecting the interests of the people, then it is essential that they become aware of it and support it. AIM: What is your action plan for this?
Can you highlight how this will affect India’s talent and IT ecosystem? AR: Some of the ideas about the action plan have been outlined above. I cannot say that I have the knowledge or expertise to create a complete action plan. However, I will be happy to work closely with other experts to create such a plan. There is a need for many qualified minds to come together for a cohesive AI policy. This is good news indeed for the Indian IT ecosystem because there will be a need for high caliber talent to make this happen. And most importantly, it will deliver “AI Made in India” from start to finish.
America’s excellence in tech developments is already famous. It is one of the most adept countries when it comes to technology, so much so that various other countries rely on their exceptional resources. For example, China’s dependency for the past two decades on chips from the US companies has resulted in a situation where ZTE, […]
["AI Features"]
["AI India", "Interviews and Discussions", "which countries have good cybersecurity"]
Srishti Deoras
2018-08-06T12:13:28
2018
2,311
["artificial intelligence", "machine learning", "AWS", "AI", "ML", "which countries have good cybersecurity", "RAG", "Aim", "deep learning", "analytics", "AI India", "R", "Interviews and Discussions"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "analytics", "Aim", "RAG", "AWS", "R"]
https://analyticsindiamag.com/ai-features/ashim-roy-of-cardiotrack-explains-why-ai-made-in-india-is-a-need-of-the-hour/
2
10
4
true
true
false
43,176
How An AI Code Autocompleter Works?
An average smartphone OS contains more than 10 million lines of code. A million lines of code would take 18,000 pages to print, equal to 14 copies of Tolstoy’s War and Peace! Though the number of lines of code is not a direct measure of a developer’s quality, it indicates the sheer quantity generated over the years. There is always a simpler, shorter version of the code, as well as a longer, more exhaustive version. What if there were a tool which uses machine learning algorithms to pick out the most suitable code and prompt with a drop-down menu? There is one now: Deep TabNine. The developers behind TabNine have introduced Deep TabNine, created as a language-agnostic autocompleter. The core idea here is to index the code and detect statistical patterns to make better suggestions while writing code. This brings additional gains in responsiveness, reliability, and ease of configuration because TabNine doesn’t need to compile the code. GPT-2 Powered TabNine The above picture shows how typing ‘ex’ prompts the IDE to suggest related options. TabNine is an autocompleter that helps developers write code faster. To improve suggestion quality, the team behind TabNine added a deep learning model. Deep TabNine is based on GPT-2, which uses the Transformer network architecture. GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. It adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations on a topic of their choosing. Semantic completion is provided by external software which TabNine communicates with using the Language Server Protocol. 
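To make the “detect statistical patterns” idea concrete, here is a toy bigram completer. This is purely illustrative and not how Deep TabNine works internally (it uses GPT-2); the corpus lines and token names below are made up for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus_lines):
    """Count which token follows which across a small code corpus."""
    model = defaultdict(Counter)
    for line in corpus_lines:
        tokens = line.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model

def suggest(model, prev_token, k=3):
    """Return up to k most likely next tokens, like a completion dropdown."""
    return [tok for tok, _ in model[prev_token].most_common(k)]

# A tiny hypothetical "codebase" to index.
corpus = [
    "user = app.get_user()",
    "users = app.get_users()",
    "user = app.get_user()",
]
model = train_bigrams(corpus)
print(suggest(model, "user"))  # most frequent follower of "user"
```

A real completer replaces the bigram counts with a language model that conditions on the entire preceding context, but the interface is the same: given a prefix, rank candidate next tokens.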
TabNine comes with default install scripts for several common language servers, and this is fully configurable, so one can use a different language server or add semantic completion for a new language. Deep TabNine uses subtle clues that are difficult for traditional tools to access. For example, the return type of app.get_user() is assumed to be an object with setter methods, while the return type of app.get_users() is assumed to be a list. Autocompletion with deep learning https://t.co/WenacHVj7z very cool! I tried related ideas a long while ago in days of char-rnn but it wasn't very useful at the time. With new toys (GPT-2) and more focus this may start to work quite well. pic.twitter.com/XSV9O7yxpf — Andrej Karpathy (@karpathy) July 18, 2019 Although modeling code and modeling natural language might appear to be unrelated tasks, modeling code requires understanding English in some unexpected ways. Developers, instead of worrying about missing a trivial piece of syntax or defining a class for task-specific functionality, can now proceed with their work at a higher level with Deep TabNine powered by OpenAI’s GPT-2. Deep TabNine requires a lot of computing power, and running the model on a laptop would introduce noticeable latency. To address this challenge, the team is now offering a service that allows developers to use TabNine’s servers for GPU-accelerated autocompletion. It’s called TabNine Cloud. Why Should One Opt For TabNine? TabNine works for all programming languages. TabNine does not require any configuration in order to work. TabNine does not require any external software (though it can integrate with it). Since TabNine does not parse the code, it will never stop working because of a mismatched bracket. If the language server is slow, TabNine will provide its own results while querying the language server in the background. TabNine typically returns its results in 20 milliseconds. 
Supported languages: Deep TabNine supports Python, JavaScript, Java, C++, C, PHP, Go, C#, Ruby, Objective-C, Rust, Swift, TypeScript, Haskell, OCaml, Scala, Kotlin, Perl, SQL, HTML, CSS, and Bash. Get hands on with Deep TabNine here.
An average smartphone OS contains more than 10 million lines of code. A million lines of code takes 18000 pages to print which is equal to Tolstoy’s War and Peace put together, 14 times! Though the number of lines of code is not a direct measure of the quality of a developer, it indicates the […]
["Deep Tech"]
["AI coding", "Deep Learning", "GPT2"]
Ram Sagar
2019-07-23T17:00:04
2019
655
["AI coding", "machine learning", "Deep Learning", "OpenAI", "AI", "ML", "RAG", "Python", "deep learning", "SQL", "JavaScript", "GPT2", "R"]
["AI", "machine learning", "ML", "deep learning", "OpenAI", "RAG", "Python", "R", "SQL", "JavaScript"]
https://analyticsindiamag.com/deep-tech/code-autocomplete-tab-nine-deep-learning/
4
10
0
true
true
false
10,099,934
Now, Build Software Engineering Teams Using AI within Minutes
Yes, it’s possible. With the rise of AI agents that use LLMs to autonomously run tasks, the next step of evolution involves the integration of multiple agents that work together to accomplish tasks. With MetaGPT already serving the same purpose, it looks like more such agents are coming to the forefront — the recent one being ChatDev, a virtual chat-powered company that aids software development. The question is, what uniqueness does this agent bring to the table? Communicative Agents A team of 12 researchers from Dalian University of Technology, Beijing University and Brown University has built ChatDev, a multi-agent team that helps build software within minutes. ChatDev follows a structured approach similar to the waterfall model, a linear, sequential approach to software development. It breaks down the development process into four clear phases: design, coding, testing, and documentation. Each phase involves a team of agents, including programmers, code reviewers, and test engineers, promoting teamwork and ensuring a smooth workflow. Representation of ChatDev Functioning. Source: ChatDev On receiving an assignment such as creating ‘a gomoku game’, as explained in the paper, the ChatDev agents actively engage in effective communication and mutual verification through collaborative chatting. This process enables them to automatically craft comprehensive software solutions that encompass source code, environment dependencies, and user manuals. A chat chain serves as a mediator, dividing each stage into smaller, individual tasks. This dual role allows for the suggestion and confirmation of solutions through context-aware communication, ultimately leading to the effective completion of specific subtasks. What about MetaGPT? When MetaGPT was introduced, the multi-agent framework was trending on GitHub with 20,000 stars. 
Similar to ChatDev, MetaGPT connects different AI agents that have been assigned various roles such as product managers, architects, project managers, and engineers, to function together. Though similar in implementing multiple agents, the purpose and approach taken by the two are different. Development vs Solution-based ChatDev, a chat-powered company, is specifically focused on software development, whereas MetaGPT is designed to enhance the capabilities of existing multi-agent systems and specifically address their limitations in solving complex tasks. MetaGPT achieves this by encoding Standardised Operating Procedures (SOPs) into prompts to improve structured coordination among agents. It also mandates modular outputs, empowering agents with domain expertise to validate results and reduce errors. Instead of relying solely on the language model’s inherent knowledge, specific guidelines and procedures are provided to guide the agents in their interactions. ChatDev, on the other hand, follows the waterfall method, a project management and development methodology, dividing the work into multiple stages such as design, coding, etc, which is particularly catered to software development. It uses a chat chain to facilitate communication and task breakdown. Large Language Model ChatDev has been experimented with on the gpt3.5-turbo-16k version of ChatGPT. MetaGPT, on the other hand, employs GPT-4 32k, and is said to have surpassed GPT-4 in pass rates on MBPP and HumanEval. ChatDev has not been compared with other LLMs. Costing The ChatDev paper mentions its astounding efficacy in software generation. It claims that the entire software development process took under seven minutes at a cost of less than $1. A project using the MetaGPT framework takes 516 seconds on average and costs $1.12, with a maximum cost of $1.35. Minimising Hallucinations Creating software systems directly with LLMs can also produce code-related hallucinations. 
These issues might manifest as incomplete implementations, absent dependencies, and undetected bugs. Such hallucinations can arise due to task vagueness and a lack of cross-checking in the decision-making process. However, this is largely addressed in ChatDev by introducing thought instruction mechanisms into each autonomous chat during the code completion, reviewing and testing stages. By performing a ‘role flip’, an instructor injects specific thoughts about code modifications into the instructions. The MetaGPT framework incorporates efficient human workflows as a meta-programming approach into LLM-based multi-agent collaboration, and looks to address hallucinations through it. However, no further details on how it will achieve this are given in the paper. With ChatDev, the need for multiple teams and people to accomplish various tasks can be eliminated. Building entire software within minutes is no easy feat, and ChatDev accomplishes it effortlessly, saving time, cost and resources. If put to use, ChatDev-type models could revolutionise the software development workflow.
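The chat-chain idea described above can be sketched as a loop over waterfall phases. Everything below is a hypothetical stub for illustration: real ChatDev agents are LLM-backed and refine each artifact through dialogue, whereas here each phase is a plain function.

```python
# Toy chat chain in the spirit of ChatDev's waterfall phases.
PHASES = ["design", "coding", "testing", "documentation"]

def run_phase(phase, task, artifact):
    """Instructor proposes a phase output; the assistant 'confirms' it (stubbed)."""
    proposal = f"[{phase}] {task} <- {artifact}"
    return proposal  # mutual verification would refine this via further chat

def chat_chain(task):
    """Pass the evolving artifact through every phase in waterfall order."""
    artifact = "requirements"
    for phase in PHASES:
        artifact = run_phase(phase, task, artifact)
    return artifact

print(chat_chain("a gomoku game"))
```

The final artifact accumulates every phase's contribution, mirroring how the chat chain threads one stage's confirmed output into the next stage's instructions.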
Through autonomous multi-agent interactions, ChatDev can build an entire software system. But, how is it different from MetaGPT?
["AI Features"]
["coding", "GitHub", "Software"]
Vandana Nair
2023-09-13T12:08:11
2023
708
["Go", "ChatGPT", "TPU", "AI", "coding", "MetaGPT", "Git", "RAG", "Software", "Aim", "multi-agent systems", "GitHub", "R"]
["AI", "ChatGPT", "MetaGPT", "multi-agent systems", "Aim", "RAG", "TPU", "R", "Go", "Git"]
https://analyticsindiamag.com/ai-features/now-build-software-engineering-teams-using-ai-in-minutes/
3
10
0
true
true
true
45,863
How Immigration Laws & Stringent Visa Regulations Are Hampering AI Research
In a world where there is a shortage of skills in emerging technologies like artificial intelligence, fostering innovation through conferences and research meet-ups is important. Global gatherings of AI/ML researchers and practitioners around the world illustrate the power of collaboration and cross-pollination of ideas. For example, at the recent World Artificial Intelligence Conference in China, more than half of the participants were from outside the country. But there have been instances where researchers were denied visas when they were invited to AI conferences abroad. At the 2018 NeurIPS Conference in Montreal, Canada, many Asian, Eastern European, and African invitees were unable to attend due to denied or delayed visa approvals. Invitees from Africa, in particular, were delayed or denied entry into the country at a rate of over 50% due to alleged security concerns. Now, Partnership on AI, a non-profit group with 90 member organisations including major universities, large technology companies like Amazon and Baidu, and organisations such as the American Psychological Association and the American Civil Liberties Union, has appealed to world governments to ease the visa approval process for AI experts. According to PAI, visa laws, policies, and practices are challenging the ability of many communities, including the artificial intelligence and machine learning (AI/ML) community, to incorporate diverse voices in their work. “Due to the emergent and rapidly evolving nature of AI technology, AI, in particular, engenders high impact AI safety and security risks, which can be mitigated by increasing the diversity of participants. Countries lose out on valuable insights from individuals around the globe when officials are required to make decisions based solely on the applicant’s nationality and without specific information to justify a denial,” said Partnership on AI. 
Overview Experts have argued that governments should eliminate nationality-based barriers in evaluating visa and permanent residency applications for researchers. Security-based denials of applications should not be nationality-based, but rather should be founded on specific and credible security and public safety threats, evidence of visa fraud, or indications of human trafficking. It is therefore important for governments to pass laws that establish special categories of visas or permits for AI/ML research. This would ensure a seamless flow of diverse ideas, which is critical for the challenges we are facing in AI research, applications, privacy and regulatory frameworks.
In a world where there is a shortage of skills in emerging technologies like artificial intelligence, fostering innovation through conference and research meet-ups is important. Global gatherings of AI/ML researchers and practitioners around the world illustrate the power of collaboration and cross-pollination of ideas. For example, at the recent World Artificial Intelligence Conference in China, […]
["AI Features"]
["AI (Artificial Intelligence)", "H-1B visa", "Visa"]
Vishal Chawla
2019-09-12T12:10:59
2019
387
["Go", "API", "artificial intelligence", "machine learning", "AWS", "AI", "ML", "Visa", "BERT", "H-1B visa", "GAN", "R", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "machine learning", "ML", "AWS", "R", "Go", "API", "BERT", "GAN"]
https://analyticsindiamag.com/ai-features/immigration-visa-ai-research/
2
10
0
false
true
false
37,412
How Neural Networks Are Helping Designers Find The Right Stock Image
Image recognition is used in numerous applications across verticals, and some of its most prominent users are stock image websites. Shutterstock is the most prominent example, having adopted artificial intelligence in its operations late last year. Its new feature, known as Shutterstock Showcase, heavily uses AI to perform a variety of tasks based on image analysis. To understand how image recognition can be applied to this field, we must first look into how image recognition on the website functions. A Unique Solution For A Unique Problem At its heart, image recognition is applying metadata to data that does not have a structure. While many methods have been tried in the past, convolutional neural networks have emerged as a top player in this space. CNNs correlate proximity with similarity, which is a natural fit for image recognition. By further filtering CNN connections by proximity, they provide efficient and accurate results. This is what Shutterstock aimed to achieve. Shutterstock is one of the oldest stock image sites on the Internet today, with over 13 years of collected photos. Reportedly, the team went through multiple models before being satisfied with one, testing them over a period of months. This was due to the vast variety of content on the website, but also due to a more arbitrary reason. As always, humans remained the weakest link, with search queries tending towards abstract concepts or ideas. Moreover, a lot of designers used the service, leading to a need for images with enough white space to insert text or logos. The inference step was performed on all images in the site’s database, including the newer images being added to the platform. This was used to create a fingerprint of each image on the site that captured its structure. The fingerprints were then grouped together and run through a ‘nearest neighbour search index’. 
This allowed the researchers to find the closest examples in the library of stock photos. The Model To Redefine Image Search Shutterstock has also long been at the forefront of using computer vision technology to provide better search results. This was the culmination of their efforts to make a catch-all image search algorithm. As mentioned previously, the concepts of language were one of the biggest points of uncertainty in the search process. This is why they tweaked their algorithm to treat the pixel data inside the images as its language, so as to recognise them more accurately. The process involves breaking down the image into its principal features, thus recognising what is inside the image as opposed to what the image was made of. This includes characteristics such as shapes, colours, and other minute details. Using AI, the website was able to implement a reverse image search that found similar images based on look and feel. This is available in text form as well, with the user selecting the most relevant results to deepen the search. The handy feature also made it into a Chrome extension called Shutterstock Reveal, which finds a similar royalty-free photo to the search query in seconds. The technology was also extended to include composition search algorithms, which allowed users to place keywords on a canvas to determine the placement of image objects in the frame. In a boon to designers everywhere, they also created a utility to allow users to pick exactly where and how much white space they needed for inserting text. This feature, known as Copy Space, was released alongside the other features for use by everyone. The Democratisation Of Easily Accessible CV Algorithms Shutterstock then proceeded to make this available for purchase by end users, allowing them to utilise the API for this service so that it can be used on any site. 
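The nearest-neighbour lookup over image fingerprints can be illustrated with a brute-force sketch. The three-dimensional vectors below are toy stand-ins for real CNN embeddings; production systems use approximate indexes rather than this exhaustive scan.

```python
import numpy as np

# Toy "fingerprints" for three images (real systems use CNN embeddings).
fingerprints = np.array([
    [0.9, 0.1, 0.0],   # image 0: visually similar to the query
    [0.8, 0.2, 0.1],   # image 1: somewhat similar
    [0.0, 0.9, 0.8],   # image 2: very different
])

def nearest_neighbours(query, index, k=2):
    """Brute-force nearest-neighbour search by Euclidean distance."""
    dists = np.linalg.norm(index - query, axis=1)
    return np.argsort(dists)[:k]  # indices of the k closest fingerprints

query = np.array([0.88, 0.12, 0.02])
print(nearest_neighbours(query, fingerprints))  # closest fingerprints first
```

Grouping fingerprints and querying this kind of index is what lets a reverse image search return the library photos whose structure best matches an uploaded image.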
This is indicative of the larger growth that is seen in the content creation space for better image recognition algorithms. The services can also be used by many other types of companies, with other products also being available on the market. A similar service is Clarifai, which also provides image recognition services on an API call. These algorithms can be used in various applications, including providing feature fit images for creative design companies, e-commerce for clothing and identifying human movement for retail analytics. Computer vision is one of the fastest evolving fields in deep learning today, with many giants such as Amazon and Google duking it out over the leadership role in the field. However, smaller companies such as Shutterstock and Clarifai provide an accessible, light and easy image recognition algorithm, which is sure to increase their adoption.
Image recognition is used in numerous applications across verticals. And one of the most prominent users for them are stock websites. Shutterstock is the most prominent example of this, that adopted artificial intelligence in their operations late last year. Their new feature known as Shutterstock Showcase heavily uses AI to perform a variety of tasks […]
["AI Features"]
["Computer Vision", "Neural Networks", "Shutterstock"]
Anirudh VK
2019-04-08T10:04:52
2019
775
["Go", "Shutterstock", "artificial intelligence", "AI", "neural network", "image recognition", "computer vision", "Aim", "deep learning", "analytics", "Computer Vision", "R", "Neural Networks"]
["AI", "artificial intelligence", "deep learning", "neural network", "computer vision", "analytics", "Aim", "image recognition", "R", "Go"]
https://analyticsindiamag.com/ai-features/how-neural-networks-are-helping-designers-find-the-right-stock-image/
3
10
1
true
true
true
10,172,029
Midjourney Debuts First Video AI Model to Pave Way for Real-Time Imagery
Midjourney has released its first-ever video generation model, marking a significant shift from still imagery to animated visuals. The V1 Video Model, launched on Thursday, introduces an image-to-video feature that animates user-generated images using automatic or manual prompts. The feature allows users to click the ‘Animate’ option on any image and choose between low and high motion settings. While low motion suits ambient scenes with subtle movement, high motion adds dynamic camera and subject motion, though with a greater chance of visual glitches. Each animation job generates four five-second clips, which can be extended. “Once you have a video you like, you can ‘extend’ them—roughly four seconds at a time—four times total,” the blog post read. The company calls this launch a stepping stone toward its long-term goal of enabling “real-time open-world simulations”, where AI-generated characters and environments are interactive and move naturally in 3D space. “Our goal is to give you something fun, easy, beautiful, and affordable so that everyone can explore. We think we’ve struck a solid balance. Though many of you will feel a need to upgrade at least one tier for more fast-minutes,” the company’s blog post stated. Users can also animate external images by uploading them and entering a custom motion prompt. Midjourney aims to keep the process simple, creative, and affordable, though it notes the computational cost is about eight times that of a typical image job. That said, each second of video is priced comparably to one image upscale—over 25 times cheaper than similar tools currently available, according to the company. The new tool is initially available on the web, with pricing and server capacity expected to evolve in the coming weeks. A “relax mode” for video generation will also roll out for Pro subscribers and above. 
With this release, Midjourney joins the growing list of generative AI platforms exploring video, alongside Google Veo, OpenAI’s Sora, and Runway. This indicates an accelerating race to bridge static imagery and dynamic storytelling.
The company, which has expertise in image generation capabilities, is adding video generation features to evolve its offering.
["AI News"]
["MidJourney"]
Ankush Das
2025-06-19T14:33:12
2025
327
["Go", "MidJourney", "OpenAI", "AI", "programming_languages:R", "programming_languages:Go", "Aim", "generative AI", "CLIP", "R"]
["AI", "generative AI", "OpenAI", "Aim", "R", "Go", "CLIP", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/midjourney-debuts-first-video-ai-model-to-pave-way-for-real-time-imagery/
2
9
0
false
false
false
60,893
7 Best Resources To Learn Facial Recognition in 2024
What is Facial Recognition? Facial recognition is arguably the most talked-about technology within the artificial intelligence landscape due to its wide range of applications and biased outputs. Several countries are adopting this technology for surveillance purposes, most notably China and India, which are among the first countries to use it on a large scale. Even the EU has pulled back from banning this technology for some years and has left the decision to individual countries. This will increase the demand for professionals who can develop solutions around facial recognition technology to simplify life and make operations efficient. Best Resources to Learn Facial Recognition Analytics India Magazine has curated the top resources from which you can learn facial recognition technology to carve a successful career in this field: 1. Convolutional Neural Networks Data science influencer Andrew Ng, along with teaching assistants from Stanford University, has devised a course that includes neural style transfer, which enables working with facial images effectively. This is a technique that manipulates digital images or videos to replicate other images. The course is designed to help learners gain basic knowledge of CNNs before teaching them about facial recognition and object detection. The four-week course includes almost 7 hours of video lessons and other reading materials. With a 4.9 rating, the course is one of the best for learning facial recognition technology. 2. Computer Vision & Image Analysis Computer Vision & Image Analysis is an advanced course which goes beyond facial and object detection and semantic segmentation models. Consequently, it requires some prerequisites, such as Introduction to AI and Deep Learning Explained, before you can understand the advanced techniques of CNNs. 
However, it also teaches classical machine learning and deep learning techniques with some of the popular libraries like Scikit-Image, Scikit-Learn, Keras, PyTorch, OpenCV, and more. 3. Handbook Of Face Recognition The book is intended for practitioners and students who want to get started with facial recognition. The lessons address some of the longest-standing predicaments of the technology, related to privacy and other technical challenges of building a face recognition system. Besides, it also teaches various statistical learning methods, such as AdaBoost for non-frontal face detection. This book is a must for any beginner who wants to build facial recognition solutions, because it educates them on the importance of privacy protection and guaranteeing visual privacy with their products. And since privacy concerns related to the technology are at an all-time high, the book is essential for anyone developing solutions with this technology. 4. Introduction To Deep Learning For Face Recognition This blog post works as an index to numerous learning resources for facial recognition technology. It will redirect you to several research papers, books, and surveys about the technology. While the blog includes historical developments associated with facial technology – which date back to 1991 – it also keeps the content updated by adding links to new advancements in the space. 5. Deep Learning: Face Recognition Deep Learning: Face Recognition is hosted on LinkedIn Learning by Adam Geitgey, who teaches techniques to tag images through facial recognition. The course also covers analysing a histogram of oriented gradients (HOG), locating facial features, finding lookalikes, and generating face encodings automatically. But the course is only an intro to facial recognition. To learn more advanced techniques, you can enrol in OpenCV for Python Developers. 6. 
Introduction To Computer Vision The course is yet another introduction to facial recognition, hosted by Georgia Tech. It teaches computer vision right from the basics, such as filtering and edge detection, before introducing advanced techniques like image-to-image projection, motion models, etc. Apart from the algorithms, you will also get to learn the mathematics behind facial technology, like Fourier transforms, matrices, and Bayes filters. Since it covers every requirement, the approximate time to complete all 70 lessons is around four months. 7. Deep Face Recognition This is one of the most-cited research papers on facial recognition. It goes deep into the manipulation of network architecture and the optimisation of loss functions for making state-of-the-art facial recognition models. Besides, it categorises the two face processing methodologies, one-to-many augmentation and many-to-one normalization, and compares the outcomes depending on the types of input image data.
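The histogram of oriented gradients mentioned above can be sketched in a few lines of NumPy. This is a simplified single-cell version for illustration only; real HOG pipelines divide the image into a grid of cells and apply block normalisation.

```python
import numpy as np

def hog_histogram(patch, bins=9):
    """Gradient-orientation histogram for one grayscale patch."""
    gy, gx = np.gradient(patch.astype(float))      # per-pixel gradients
    magnitude = np.hypot(gx, gy)                   # gradient strength
    angle = np.degrees(np.arctan2(gy, gx)) % 180   # unsigned orientations
    hist, _ = np.histogram(angle, bins=bins, range=(0, 180),
                           weights=magnitude)      # vote by strength
    return hist / (hist.sum() + 1e-9)              # normalise for comparison

# A vertical edge: left half dark, right half bright.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
print(hog_histogram(patch))
```

For this vertical edge, all the gradient energy points horizontally (orientation near 0 degrees), so the first bin dominates; a face detector compares such histograms against learned templates.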
What is Facial Recognition? Facial recognition is arguably the most talked-about technology within the artificial intelligence landscape due to its wide range of applications and biased outputs. Several countries are adopting this technology for surveillance purposes, most notably China and India. Both are among the first countries to make use of this technology on a […]
["AI Trends"]
["facial recognition India", "fourier transform machine learning"]
Rohit Yadav
2020-04-03T19:00:00
2020
744
["data science", "machine learning", "artificial intelligence", "Keras", "AI", "fourier transform machine learning", "neural network", "PyTorch", "computer vision", "facial recognition India", "deep learning", "analytics"]
["AI", "artificial intelligence", "machine learning", "deep learning", "neural network", "computer vision", "data science", "analytics", "PyTorch", "Keras"]
https://analyticsindiamag.com/ai-trends/top-7-resources-to-learn-facial-recognition/
4
10
0
true
false
true
10,003,506
Building a classifier – How I solved the "Blog Authorship Corpus" problem on Kaggle.
One of the most common tasks in machine learning is classification, where a predictive model is built to assign things to different classes. But do you think it is possible to infer attributes of an author from writings like blogs and articles? Vast amounts of text are published on the internet as articles, blogs and so on, often with little or no information about the writer. Through this article, we will tackle this problem by building a classifier that predicts multiple attributes of the author, such as age, gender, astrological sign and industry, from their texts. This problem is also listed as "Blog Authorship Corpus" on Kaggle. What will you gain from this article? How to solve the Blog Authorship Corpus challenge? How to download and load the large corpus for the task? How to pre-process the textual corpus? How to build a model that predicts the attributes of the author? The Dataset The data set can be directly downloaded from Kaggle. It consists of posts from 19,320 bloggers, collected in August 2004 from blogger.com. It has a total of 681,288 posts and 140 million words. All the bloggers fall into 3 age groups: (13-17), (23-27) and (33-47). We will be using Google Colab for the task, though you can work with other IDEs as well. Implementing Author Feature Prediction Let us quickly import all the required libraries. Use the below code to do the same. 
import re import pandas as pd from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer from sklearn.preprocessing import MultiLabelBinarizer from sklearn.multiclass import OneVsRestClassifier from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score After importing the libraries we now load the data set and print the first 10 rows of the data. When we did a bit of EDA over the data we found there were no missing values and overall there are a total of 681288 rows and 7 columns. data = pd.read_csv('blogtext.csv') data.head(10) We will work with only 3000 rows while building the model to keep iteration fast; once the pipeline works we can retrain it on the whole data. After selecting the 3000 rows we will pre-process the data – removing unwanted characters, converting text to lowercase and stripping whitespace, followed by stopword removal. data_new = data[:3000].copy() data_new['text'] = data_new['text'].str.replace('[^A-Za-z]', ' ', regex=True) data_new['text'] = data_new['text'].str.lower() data_new['text'] = data_new['text'].str.strip() from nltk.corpus import stopwords stop_words = set(stopwords.words('english')) data_new['text'] = data_new['text'].apply(lambda x: ' '.join([word for word in x.split() if word not in stop_words])) The output of the cleaned text after stopword removal is given below. We will now merge the labels so that we have all the labels for a particular sentence in one column. After merging all the labels you will see the transformation as shown in the image. Use the code below to merge the labels. 
data_new['age'] = data_new['age'].astype(str)
data_new['labels'] = data_new[['gender', 'age', 'topic', 'sign']].apply(lambda x: ','.join(x), axis=1)
merged_data = data_new.drop(labels=['date', 'gender', 'age', 'topic', 'sign', 'id'], axis=1)
merged_data.head()

We then define the independent and dependent features, X and y respectively. After defining X and y we split the data into training and testing sets; we fit the model on the training data and evaluate it on the test data. Use the code given below for the same.

X = merged_data['text']
merged_data['labels'] = merged_data['labels'].str.lower()
y = merged_data['labels']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=43)

After splitting the data, we vectorize the text by building a bag of words with a count vectorizer and transform both the training and the test split. We also fit a separate count vectorizer on the labels to collect the distinct label classes, and then convert the training and testing labels using a multi-label binarizer. Use the code below to do the same.

vectorizer = CountVectorizer(min_df=2, ngram_range=(1, 2), stop_words='english')
X_train = vectorizer.fit_transform(X_train)
X_test = vectorizer.transform(X_test)

vectorizer_labels = CountVectorizer(min_df=1, ngram_range=(1, 1), stop_words='english')
labels_vector = vectorizer_labels.fit_transform(y)
label_classes = list(vectorizer_labels.vocabulary_.keys())
mlb = MultiLabelBinarizer(classes=label_classes)

Before applying the multi-label binarizer, we need to convert the labels into the format it accepts: a list of label lists. We do that with the code below and then transform both sets of labels.
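To make the multi-label representation concrete, here is a hedged toy sketch (the label values below are invented for illustration, not taken from the corpus) of what MultiLabelBinarizer produces: each distinct label becomes one binary output column, and each row marks which labels apply to that author.

```python
# Illustrative sketch: MultiLabelBinarizer turns per-author label lists
# into a binary indicator matrix suitable for multi-label training.
from sklearn.preprocessing import MultiLabelBinarizer

# Toy label lists, mimicking the merged gender/age/topic/sign column after splitting.
labels = [
    ["male", "25", "student", "leo"],
    ["female", "34", "arts", "aries"],
]

mlb = MultiLabelBinarizer()
binary = mlb.fit_transform(labels)

print(sorted(mlb.classes_))  # every distinct label becomes one output column
print(binary.shape)          # (2, 8): 2 samples, 8 distinct labels
```

Each row of the matrix has exactly four 1s here, one per attribute, which is the target shape the one-vs-rest classifier later learns to predict.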
y = [["".join(re.findall(r"\w", f)) for f in lst] for lst in [s.split(",") for s in y]]
mlb.fit(y)
y_train = [["".join(re.findall(r"\w", f)) for f in lst] for lst in [s.split(",") for s in y_train]]
y_train = mlb.transform(y_train)
y_test = [["".join(re.findall(r"\w", f)) for f in lst] for lst in [s.split(",") for s in y_test]]
y_test = mlb.transform(y_test)

Once the labels have been transformed with the multi-label binarizer, we define the classifier. We use OneVsRestClassifier, which is based on the one-vs-rest approach, with LogisticRegression as the base classifier. Training may take time because of the large volume of data. After initialising the classifier, we fit the training data and check the training accuracy. Use the code below to do the same. Classification Model for Author Feature Prediction

clf = LogisticRegression(solver='lbfgs', max_iter=1000)
clf = OneVsRestClassifier(clf)
clf.fit(X_train, y_train)
print("Training Accuracy:", clf.score(X_train, y_train))

Model Evaluation After training, we make predictions on the test data and evaluate the model's performance with different metrics. Note that for multi-label outputs, the F1 and precision scores need an explicit averaging mode. Use the code below to do the same.

y_pred = clf.predict(X_test)
print("Test Accuracy: " + str(accuracy_score(y_test, y_pred)))
print("F1_micro: " + str(f1_score(y_test, y_pred, average='micro')))
print("F1_macro: " + str(f1_score(y_test, y_pred, average='macro')))
print("Precision: " + str(precision_score(y_test, y_pred, average='micro')))

Author Feature Predictions by the Model We will now check a few of the predictions and compare them with the original labels. We compared the predictions for 2 sentences, and the model predicted their labels correctly.

print("Predicted :", y_pred[24])
print("Actual :", y_test[24])
print("Predicted :", y_pred[55])
print("Actual :", y_test[55])

Conclusion We can conclude that the model built with OneVsRestClassifier did not reach very high accuracy, but the 2 predictions we inspected were correct.
You can also try building the model with different classifiers to improve its accuracy. In the end, we evaluated the model's performance using metrics like precision, recall, and F1 score, obtaining an F1 score of 75 and a precision of 82, which is fairly satisfactory.
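The one-vs-rest strategy and the averaging modes used above can be demonstrated end to end on synthetic data. This is a hedged, self-contained sketch (the data is random, not the blog corpus): OneVsRestClassifier fits one binary LogisticRegression per label column, and multi-label F1 needs an explicit `average` argument.

```python
# Toy demonstration of One-vs-Rest on synthetic multi-label data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
# Two synthetic labels, each depending on a different feature.
y = np.column_stack([(X[:, 0] > 0.5).astype(int), (X[:, 1] > 0.5).astype(int)])

# One binary classifier is trained per label column.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
pred = clf.predict(X)

print("micro-F1:", f1_score(y, pred, average="micro"))
print("macro-F1:", f1_score(y, pred, average="macro"))
```

Micro averaging pools all label decisions before computing F1, while macro averaging computes F1 per label and then takes the mean; the gap between the two widens when some labels are much rarer than others.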
Through this article, we will try solving this problem by building a classifier that would be able to predict multiple features such as Age, Gender, Astrological sign and Industry about the author from his texts.
["Deep Tech"]
["Deep Learning", "Natural Language Processing", "Neural Networks"]
Rohit Dwivedi
2020-07-29T12:00:00
2020
1,057
["Go", "machine learning", "TPU", "AI", "Natural Language Processing", "ML", "R", "RAG", "Colab", "NLTK", "Deep Learning", "Pandas", "Neural Networks"]
["AI", "machine learning", "ML", "Colab", "NLTK", "Pandas", "RAG", "TPU", "R", "Go"]
https://analyticsindiamag.com/deep-tech/how-to-predict-authors-features-from-his-writings/
3
10
1
true
true
false
10,060,277
NFTs in gaming: What’s good, what’s not
Last year, Non-fungible Tokens (NFTs) broke the internet. Video game publishers, artists and streaming companies are cottoning on to their game-changing potential. yeah i built @twitch it has millions of users.& gaming NFTs are way bigger.— Justin Kan (@justinkan) February 8, 2022 Blockchain gaming is a means of turning the digital assets inside video games (such as collectibles or cosmetic skins) into real-world assets in the form of NFTs. The inspiration for introducing NFTs to gaming likely comes from the success of multiplayer games such as Runescape and World of Warcraft, with their thriving in-game economies. Gamers spend real money on grey markets, making unauthorised purchases of game accounts or items on third-party sites. The game Second Life made use of digital currencies over a decade ago. Likewise, CryptoKitties, the collectible virtual pets platform from Dapper Labs, also anticipated NFTs in gaming. DLCs to NFTs When you buy a non-NFT skin in a game today, the record of your ownership is linked to your gaming account. The game developer is in complete charge of how your downloadable content (DLC) functions: how your ownership is authenticated, where you go to download it, and how the item works in the game. NFTs would externalise this process. So, for instance, instead of your Electronic Arts (EA) account data being used to confirm your ownership of a certain Sub Zero skin in Battlefield 2042, the game would check with an external blockchain to ensure that you are the owner of that skin. This would also allow you to transfer that skin (which is an NFT) to someone else, and Battlefield 2042 would be able to keep track of who owns it on the blockchain. Ubisoft and its Quartz programme is a high-profile example of a company moving towards replacing standard microtransactions with limited-edition NFTs. The company calls its NFTs "digits," and offers them in the form of cosmetics in Ghost Recon: Breakpoint for PC.
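The externalised ownership check described above can be sketched very roughly. This is purely illustrative, not real blockchain code: a toy in-memory ledger stands in for the chain, and all names (the class, the token ID, the accounts) are invented here. On a real chain, the lookup would be an ERC-721 `ownerOf(tokenId)` contract call rather than a dictionary lookup.

```python
# Toy in-memory stand-in for the external ownership registry a game would
# consult instead of its own account database.
class ToyNFTLedger:
    def __init__(self):
        self._owners = {}  # token_id -> owning account

    def mint(self, token_id, owner):
        if token_id in self._owners:
            raise ValueError("token already exists")
        self._owners[token_id] = owner

    def owner_of(self, token_id):
        return self._owners[token_id]

    def transfer(self, token_id, sender, recipient):
        # Only the current owner may transfer the token.
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient

ledger = ToyNFTLedger()
ledger.mint("subzero-skin-001", "alice")
ledger.transfer("subzero-skin-001", "alice", "bob")
print(ledger.owner_of("subzero-skin-001"))  # -> bob
```

The point of the sketch is the division of responsibility: the game renders the skin, but the registry, not the publisher's account system, is the authority on who owns it and whether a transfer is allowed.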
Meanwhile, in the game Axie Infinity, players can earn a cryptocurrency called Smooth Love Potion (SLP) as they win battles and go on adventures. The cryptocurrency can be exchanged for real-world cash; in fact, a 22-year-old earned enough SLP to buy two houses. Pros Since NFTs are stored on the blockchain, all players receive the "right to transfer" the digital collectibles they gain access to. Limited items bought from in-game marketplaces can retain their value for as long as there is demand for the item. The "play-to-earn" model is a form of compensation for the amount of time players spend accumulating in-game tokens. The potential of turning in-game rewards into actual cash is a major draw. The misinformation on #NFTs and blockchain gaming is like Blockbuster Video trying to maintain VHS. #NFTs and token economies give power, profit and automomy to communities.You may not like it but if you don’t see the future in virtual world economies, NGMI. $PYR #Metaverse pic.twitter.com/x8EgZkII8z— Vulcan Forged (@VulcanForged) February 7, 2022 Cons Many of the proposed benefits of NFTs in gaming, such as the ability to trade "skins", are feasible without the addition of NFTs. The argument for the implementation of blockchain comes across as superficial at best. "but what if you could take your Digits from one Ubisoft game to another —"yes this is entirely possible without NFTs and the limiting factor is getting a bunch of different studios to integrate the same art/code assets into their games, not keeping track of ownership— Adi Robertson (@[email protected]) (@thedextriarchy) December 7, 2021 The "play-to-earn" model, where players can acquire and lose assets, may promote gambling. The ability for users to consolidate their money into a single expensive asset, and for individuals to hold in-game tokens that correspond to real-world assets, could lead to market manipulation and other breaches of securities regulations.
People might spend more time trying to sell each other stuff than actually playing games. Most importantly, core gamers themselves don't seem too excited about NFTs in games. The reaction towards Ubisoft's "digits," for example, was overwhelmingly negative: the announcement video on YouTube garnered more than 40,000 dislikes and only 2,000 likes, and as of December 2021, only twenty NFTs had been sold. As noted (and as has been covered by the gaming press aggregating this all day), very few Ubisoft NFTs (aka Quartz Digits) have sold. Marketplace Rarible lists 9 sales since the 15th (and none since the 16th)Four of those sales were to the same account— Stephen Totilo (@stephentotilo) December 21, 2021
Ubisoft and its Quartz programme is a high profile example of a company moving towards replacing standard microtransactions with limited-edition NFTs.
["IT Services"]
[]
Srishti Mukherjee
2022-02-13T16:00:00
2022
746
["Go", "programming_languages:R", "AI", "programming_languages:Go", "Git", "BERT", "llm_models:BERT", "R"]
["AI", "R", "Go", "Git", "BERT", "llm_models:BERT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/it-services/nfts-in-gaming-whats-good-whats-not/
3
8
1
true
true
false
14,022
IPL 2017: Big Data is at the crease again — set to play another long inning
Image source: VIVO IPL 2017 IPL 2017 is around the corner again, and the 10th edition is expected to be bigger and better. Analytics India Magazine rounds up the big data and analytics happenings for this edition, set to begin tomorrow, and lists some of the most popular tools used in cricket analytics. Big data and analytics is no longer a stranger to the world of cricket; in fact, it is reminiscent of the Brad Pitt-starrer Moneyball, helping formulate the right team and pitching the path to success. SAP Labs India even helped Shah Rukh Khan-owned KKR script a resounding success, clinching the IPL trophy in 2014, via custom-developed SAP auction analytics, game analytics and post-game analysis that helped the team improve its performance tremendously. India's foremost cricket analyst S Ramakrishnan, founder of Chennai-headquartered sports analytics firm Sports Mechanics Pvt Ltd, has helped several IPL teams (Mumbai Indians, Delhi Daredevils, Chennai Super Kings, Deccan Chargers and Royal Challengers Bangalore, among others) with performance analytics and fan engagement as a strategic analyst partner. SunRisers Hyderabad snapped up two Afghan players during the auction It all starts with the IPL auction, creating a better team through data The IPL auction 2017 took place in Bangalore in February. According to Cricmetric, a Seattle-based advanced cricket analytics provider, out of the 351 players, 66 garnered top bids and were up for grabs in the hotly contested bidding process. Cricmetric has outlined a few trends on how the eight franchises used big bucks to firm up their squads: Pacers were the most sought-after commodity at the auction: Kagiso Rabada was snapped up by Delhi Daredevils for INR 5 crore; Trent Boult went for the same amount to Kolkata Knight Riders; Pat Cummins to Delhi Daredevils for INR 4.5 crore.
Other high-profile names include Chris Woakes, bought by Kolkata Knight Riders for INR 4.2 crore, and Nathan Coulter-Nile, to the tune of INR 3.5 crore. One reason the IPL team owners decided to stock up on pacers was that Indian pitches are not deemed very spin friendly, so pacers could become match winners. Image source: VIVO IPL 2017 Underdogs become overnight millionaires: A recurring phenomenon in IPL seasons, where ordinary players with excellent track records are plucked out of obscurity. For IPL 2017, some names that attracted big price tags are Thangarasu Natarajan, whose base price was INR 10 lakh and who was snatched up by Kings XI Punjab for INR 3 crore; Mohammed Siraj, the Hyderabad pacer picked up by Sunrisers Hyderabad for INR 2.6 crore; and Karn Sharma, bought by Mumbai Indians for INR 3.2 crore. Two-time champions vs contender: According to the analysis of Predict 22, an AI-powered cricket analytics website, two-time IPL winner Mumbai Indians retained most of their 2016 squad and bought backups for most positions. They also added Aussie fast bowler Mitchell Johnson to their bowling arsenal. Meanwhile, the Shane Watson-led Bangalore franchise bolstered its bench strength by buying a "lot of players at minimum price of INR 10,00,000." The Virat Kohli-led team also has an aggressive batting lineup and strong spin bowlers. Analytics solutions offered in IPL RCB came close, finishing runners-up last year, but has never clinched the trophy According to experts, a lot of big data analysis goes into formulating game strategy and post-match reviews. Analytics also gives a big hint regarding match outcomes. According to Tyrone Systems, a leading provider of storage solutions, sensors in the pitch provide truckloads of data per match, which, when woven with legacy data, makes for an interesting perspective. The teams that do their analytics homework perfectly have a greater chance of winning.
WASP: The Winning and Score Predictor (WASP) is one of the most popular tools used in cricket for predicting results in limited-overs matches. The WASP system looks at data from previous matches, estimates the probability of runs in each game and evaluates the probability of winning or the total runs a team is likely to make. Other factors it looks at include boundary size, weather and pitch. Win Probability Statistic: Seattle-based Cricmetric has rolled out advanced metrics to analyse limited-overs cricket matches and the performance of cricket players. According to a Cricmetric analyst, a model calculates the win probability of a team in a limited-overs match in real time by factoring in historical data from previous matches. Through the win probability statistic, they are able to measure the contribution of each player, which helps in assessing player performances. The cricket analytics provider also claims its insights go beyond the traditional statistics used in the game, such as batting average or strike rate. Team Performance and Talent Scouting: Dubbed the performance analyst of the Indian cricket team since 2003, Sports Mechanics has contributed immensely to Indian cricket through insights related to team performance and fan engagement. A pioneer in sports analytics, Sports Mechanics first introduced video-based learning in sports and performance analytics. Another sports analytics organisation, Sportingmindz, introduced '22yardz', a cricket analysis software designed to analyse different aspects of live matches. Besides detailed statistics, it also provides video analysis. Qlik powering Visual Analytics: Qlik rolled out a new application for IPL 2016, developed on Qlik Sense, that provided player performance, team analysis and league standings to viewers. The IPL application allowed cricket fans to engage in self-service data analysis and stay on top of their favourite teams and players.
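The general idea behind win-probability tools like WASP can be sketched in a few lines. The real systems and their feature sets are proprietary; the data below is synthetic and the features (runs needed, balls left, wickets in hand) are just a plausible illustration: fit a classifier on historical chase states and read out a probability of winning for the current state of the match.

```python
# Toy win-probability sketch on synthetic chase data (NOT the WASP model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(42)
n = 1000
runs_needed = rng.randint(1, 120, n)
balls_left = rng.randint(1, 120, n)
wickets_left = rng.randint(1, 11, n)

# Synthetic outcome: chases with a low required rate and wickets in hand tend to win.
required_rate = runs_needed / balls_left * 6
win = ((required_rate < 8) & (wickets_left >= 3)).astype(int)

X = np.column_stack([runs_needed, balls_left, wickets_left])
model = LogisticRegression(max_iter=1000).fit(X, win)

# Probability of winning when 30 runs are needed off 24 balls with 5 wickets left.
p = model.predict_proba([[30, 24, 5]])[0, 1]
print(f"win probability: {p:.2f}")
```

A production system would use far richer features (the article mentions boundary size, weather and pitch) and a much larger historical database, but the shape of the computation, match state in, probability out, is the same.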
Adoption of analytics in the cricketing arena Sports analytics has become a powerful strategic tool used by major sports teams. Opportunities in big data and pro sports analytics are now greater than ever. The Academy Award-nominated 2011 movie Moneyball brought analytics to the major sports arena and highlighted the availability of big data to answer questions related to sports performance. Cricket is the second most popular sport in the world, with a fan base of 2.5 billion. In cricket, one has access to very detailed data, with the result of every ball bowled in the last 20 years at hand. Such detailed data can power great insights and solutions.
IPL 2017 is around the corner again and the 10th edition is expected to be bigger and better. Analytics India Magazine rounds up the big data and analytics happenings for this edition that is set to begin tomorrow and lists down some of the most popular analytics tools used in cricket analytics. Big data and […]
["IT Services"]
[]
Richa Bhatia
2017-04-04T10:43:59
2017
1,028
["big data", "Go", "programming_languages:R", "AI", "programming_languages:Go", "RAG", "Aim", "analytics", "GAN", "R"]
["AI", "analytics", "Aim", "RAG", "R", "Go", "big data", "GAN", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/it-services/ipl-2017-big-data-is-at-crease-against-set-play-another-long-innings/
3
10
0
false
true
true
44,984
Meet AutoGAN, The Neural Architecture Search for Generative Adversarial Networks
Deep learning has made tremendous progress over the last few years. Researchers have been designing neural network architectures manually and have been successful at several complex tasks such as speech recognition, emotion detection, image and video classification, object detection, machine translation, and much more. However, manually created models are time-consuming to design and prone to errors. This shortfall has led researchers to the next step of automating machine learning, in the form of neural architecture search (NAS). This method has outperformed manually created neural network models. NAS is a subfield of AutoML and has been used for automating the design of deep neural networks that can outperform human-made models. Recently, researchers from Texas A&M University and the MIT-IBM Watson AI Lab developed an architecture known as AutoGAN by introducing a neural architecture search algorithm into the GAN architecture. This model is said to outperform the existing state-of-the-art manually created Generative Adversarial Network (GAN) models. Human-created GAN models are often unstable and prone to collapse, which is why the researchers merged the NAS architecture into the training process. Behind the Model AutoGAN is based on a multi-level architecture search strategy, where the generator is composed of several cells. In this model, the search space is defined to capture GAN architectural variations, and an RNN controller assists the architecture search. Basically, AutoGAN follows the idea of using a recurrent neural network (RNN) controller to choose blocks from its search space. The model then brings in three key aspects of neural architecture search (NAS): the search space, the proxy task and the optimisation algorithm.
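The core controller mechanic can be shown in a simplified sketch. This is a hedged illustration, not AutoGAN's code: the paper's controller is an RNN trained with a reward signal, whereas here fixed random logits stand in for its outputs, and the operation names in the search space are invented for the example. At each step, the controller samples one operation for the next generator cell.

```python
# Simplified stand-in for an architecture-search controller: sample one
# block choice per generator cell from per-step logits.
import numpy as np

# Hypothetical operation choices; AutoGAN's real search space differs.
SEARCH_SPACE = ["conv3x3", "conv5x5", "deconv", "nearest_upsample", "bilinear_upsample"]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_architecture(logits_per_step, rng):
    """Sample one block choice per cell from the controller's logits."""
    arch = []
    for logits in logits_per_step:
        probs = softmax(logits)
        choice = rng.choice(len(SEARCH_SPACE), p=probs)
        arch.append(SEARCH_SPACE[choice])
    return arch

rng = np.random.RandomState(0)
# Stand-in logits for a 3-cell generator; a trained RNN controller would
# produce these sequentially and be updated with a reward such as the
# Inception score of the resulting GAN.
logits = rng.randn(3, len(SEARCH_SPACE))
print(sample_architecture(logits, rng))
```

The search loop then trains the sampled architecture briefly (the proxy task), scores it, and uses that score to update the controller so better-performing choices become more probable.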
Dataset Used To achieve competitive image generation results against current state-of-the-art hand-crafted GAN models, the researchers used two datasets, CIFAR-10 and STL-10. CIFAR-10 consists of 50,000 training images and 10,000 test images, each of 32 × 32 resolution, and the training set is used to train the AutoGAN model without any data augmentation. The STL-10 dataset is used to show the transferability of the architectures discovered by AutoGAN. On the CIFAR-10 dataset, AutoGAN obtained an Inception score of 8.55 and a Fréchet Inception Distance (FID) of 12.42. On both datasets, AutoGAN established new state-of-the-art FID scores. How Is It Better The AutoGAN framework employs a multi-level architecture search (MLAS) strategy by default. The model can identify highly effective architectures on both the CIFAR-10 and STL-10 datasets, achieving competitive image generation results against the current state-of-the-art, hand-crafted GAN models. In terms of Inception score, AutoGAN is slightly behind Progressive GAN, and surpasses many recent strong competitors such as SN-GAN, improving MMD-GAN, Dist-GAN, MGAN and WGAN-GP. On other GAN metrics, e.g. the Fréchet Inception Distance (FID), the model outperforms all current state-of-the-art models.
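The FID metric cited above compares Gaussian fits of Inception activations for real and generated images: lower is better, and identical distributions score zero. A minimal numpy/scipy sketch of the standard formula FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^{1/2}), using made-up statistics rather than real Inception features:

```python
# Minimal Frechet Inception Distance between two Gaussians (mu, sigma).
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(mu1, sigma1, mu2, sigma2):
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean)

# Identical distributions give FID ~ 0; shifting the mean raises it.
mu = np.zeros(4)
sigma = np.eye(4)
print(frechet_inception_distance(mu, sigma, mu, sigma))        # ~0.0
print(frechet_inception_distance(mu, sigma, mu + 1.0, sigma))  # ~4.0
```

In practice, the means and covariances are estimated from Inception-v3 activations over thousands of real and generated images; the formula itself is what the 12.42 score on CIFAR-10 refers to.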
Limitations of AutoGAN According to the researchers, the key challenge lies in further improving the efficiency of the search algorithm. Due to the high instability and hyperparameter sensitivity of GAN training itself, AutoGAN appears to be more challenging than NAS for image classification. Finding an appropriate metric to evaluate and guide the search process is another difficulty the researchers encountered. Despite the preliminary success, the researchers state that the model needs improvement: the current search space of AutoGAN is limited, and the model has not been tested on higher-resolution image synthesis such as ImageNet. Similar Research Last year, researchers from Rutgers University and Perspecta Labs developed an AutoGAN model that counters adversarial attacks by enhancing the lower-dimensional manifold defined by the training data and projecting perturbed data points onto it. The approach used a Generative Adversarial Network (GAN) with an autoencoder generator and a discriminator. Outlook Introduced by Ian Goodfellow in 2014, the Generative Adversarial Network (GAN) is one of the most popular neural network approaches. GANs have certain advantages: the deep neural network does not require labelled data while learning internal representations of the data, and it generates data that is almost like real data. GANs have proved to be very good at reconstructing manifolds of natural image datasets in their original high-dimensional input spaces. To improve the quality of generated images, many researchers have put in effort and proposed several sophisticated neural network architectures.
Deep Learning has made tremendous progress over the last few years. Researchers have been designing neural net architectures manually and have been successful in several complex tasks such as speech recognition, emotion detection, image and video classification, object detection, machine translation, and much more. However, manually created models are somewhat time-consuming and always have a […]
["AI Features"]
[]
Ambika Choudhury
2019-08-26T13:00:53
2019
720
["Go", "machine learning", "AI", "neural network", "ML", "deep learning", "object detection", "RNN", "GAN", "R"]
["AI", "machine learning", "ML", "deep learning", "neural network", "object detection", "R", "Go", "GAN", "RNN"]
https://analyticsindiamag.com/ai-features/meet-autogan-the-neural-architecture-search-for-generative-adversarial-networks/
4
10
0
false
true
true
41,121
Union Budget 2019: India Inc Wants AI Centers, Robust IT Policy & Research Platforms
The Bharatiya Janata Party's (BJP) euphoric win in the 2019 Lok Sabha elections means that Narendra Modi will be India's Prime Minister for a second term. However, along with the absolute win came massive expectations from the startup ecosystem. Startup founders and VCs are clamouring for a simplified tax structure, improved bureaucracy and more state investment in startups. India Inc. has expressed its concerns over the sweeping reforms made under the last term of Modi's candidacy. Regulatory moves such as the angel tax have hit the startup ecosystem hard, a point of contention for many startup founders and managers. In addition, the all-encompassing compliance requirements of the goods and services tax have also turned out to be a pain point for startups and enterprises alike. While these remain big issues for founders, VCs face a huge problem in the existing angel tax regulations. With Modi in office for another term, the general expectation among investors is that angel tax regulations will be loosened. Startups are expecting Modi to change the landscape and make India one of the premier markets for innovation. With its potential for growth, a young workforce and many markets in need of innovation, India has the ingredients for a healthy startup ecosystem. As the last term brought many allowances and funding for startups, the expectation for this term is an ecosystem of streamlined business frameworks. Analytics India Magazine reached out to various players in India Inc. to learn their expectations from this year's Budget. Check out the industry's sentiments below: Mayur Saraswat: Head of Digital, IT & Telecom Vertical, Teamlease Services. The expectations of the start-up industry from the upcoming budget are very high, especially as the industry is at an inflexion point. The startup ecosystem attracted FDI worth USD 239 billion in the last five years and has made India the second-largest start-up hub in the world.
The industry is hopeful that the upcoming budget will have policies and recommendations that further strengthen the inflow. One of the key areas that can aid this is clarity with regard to the taxes and compliances a start-up needs to adhere to. Apart from easing the financial burden, clarity on taxes and the regulatory process will reduce the wastage of resources, smoothen the process and improve compliance. Clarity will improve the confidence of investors in the ecosystem. Further, the government should look at investing in a National Centre for Artificial Intelligence (AI), as it will help set up data, skilling, re-skilling and research platforms, thereby helping to solve legal, regulatory and cybersecurity challenges. It will also look at how AI can be used in health care, education and agriculture from a public systems delivery perspective. The investment in the national centre will also boost the segment and bring efficiency and more jobs. Another aspect where intervention is required is the latest e-commerce policy, as the amendments to it are crucial to protect the interests of dependent or impacted parties. Though e-commerce players have started removing products from their websites and complying with the changes, the move may decelerate the growth of the sector. Mahesh Makhija: Partner and Leader, Digital and Emerging Tech, EY. With the goal of making India a $5 trillion economy by 2024, the government is likely to introduce measures in the budget to encourage the growth of start-ups, especially in new growth industries like AI and machine learning. We would expect the government to further expand on the measures announced in the interim budget and allocate additional funds to support deep tech segments like AI, robotics and machine learning. Kushal Nahata: CEO & Co-Founder of FarEye. Budget 2019 should include regulations that will drive organisations to digitalise key logistics and supply chain processes.
For instance, by mandating digitalisation of certain key accounting, billing and logistics processes, the government can ensure greater levels of compliance (especially with regard to environmental sustainability) and tackle corruption better. Also, this year's budget should highlight the current state of eWay bill adoption. The pace of development of some crucial infrastructure remains slow. There is a need to speed up the development of projects like the Dedicated Freight Corridor (DFC). We are also expecting announcements with regard to building integrated transportation hubs or Multi-Modal Logistics Parks. The government can plan to introduce special windows to help logistics startups compete with large technology providers when it comes to winning government tenders. Also, there is an urgent need to simplify GST, especially with regard to the logistics industry. Once multiple types of businesses are brought under an organised trade structure, supply chain organisations will be able to deliver a better value proposition to customers and hence boost revenue collections for the government. Deploying a uniform GST rate across the country is another initiative that the government needs to talk about in this year's budget. Supaul Chanda: Business Head, Digital, Teamlease Services. Budget expectations for the technology sector are three-pronged: boost production, encourage skilling and simplify regulatory requirements. The government has taken a much-needed step via Make in India, but this initiative needs to be taken further by giving an additional boost to companies manufacturing electronic devices and components, and by providing tax benefits to companies setting up local R&D centres. Among the biggest lacunae across the Indian IT industry today is the lack of skilled personnel; any initiatives focusing on this would be appreciated.
Lastly, on the regulatory front, the government should focus on creating a more robust IT policy, especially for regulating IT privacy laws and blockchain, so there is clarity for companies working with new-age technologies to set up shop in India. Gurprit Singh: Managing Partner and Co-Founder at Umbrella Infocare. We expect drastic announcements in this budget from the Modi 2.0 government to improve business growth and overall sentiment, so as to achieve the vision of a USD 5 trillion economy by 2025 and the third-largest economy by 2030. We expect reduced income tax for corporations and individuals, up to a maximum slab of 25% for all, and making equity more tax-friendly by removing STT (this will pump more liquidity into the system, internally and through foreign investment). We also encourage investment in R&D to make India a high-value producer of goods and services, and the creation of friendly policies for sectors like IT, tourism and more. Rashi Gupta: Chief Data Scientist & Co-founder, Rezo.AI. Startups contribute substantial growth to the economy. Hence, we urge the government to make early-stage and growth capital more easily accessible for startups. Also, relaxation of regulatory compliance procedures and the development of incubation centres to aid employment generation and spur growth will open up opportunities for startups. Since one of the major challenges faced by startups is on the regulatory and compliance front, regulations should be made more startup-friendly and compliance requirements should be eased. Neel Juriasingani: CEO & Co-founder, Datacultr. We believe that the funds allocated for startups in the budget should be easily accessible to startups incubated by the central or state governments. Besides, we expect that with the new budget, the government will introduce easy early-stage funding and grants for tech start-ups working in the space of digital and financial inclusion.
The government should also make sure that startups have a level playing field with other companies, particularly listed companies, so they can participate in and win tenders for central and state government projects. Another key area the GoI needs to address is GST compliance. We also expect the government to reduce tax rates, creating a more welcoming ecosystem for industry players. Policy regulations like these will allow entrepreneurs to devote their time, energy and resources to succeeding and building on more innovative ideas.
The Bharatiya Janata Party’s (BJP) euphoric win in the 2019 Lok Sabha elections means that Narendra Modi will be India’s Prime Minister for a second term. However, along with the absolute win came the massive expectations from the startup ecosystem. Startup founders and VCs are clamouring for a simplified tax structure, improved bureaucracy and more […]
["AI Features"]
["budget", "latest ai products across industries", "Modi", "Startups"]
Anirudh VK
2019-06-21T11:47:40
2019
1,298
["Go", "budget", "artificial intelligence", "machine learning", "AWS", "AI", "ML", "Git", "RAG", "latest ai products across industries", "analytics", "Modi", "Startups", "R"]
["AI", "artificial intelligence", "machine learning", "ML", "analytics", "RAG", "AWS", "R", "Go", "Git"]
https://analyticsindiamag.com/ai-features/union-budget-2019-india-inc-wants-ai-centers-robust-it-policy-research-platforms/
2
10
6
false
false
false
623
Analytics India Jobs Study 2012
The global market for business analytics software has grown impressively in the last two years, fueled by pervasive hype about “big data” as well as new technological innovations. In India, despite persisting economic uncertainties and slowing growth in the IT sector, analysts predict that analytics will remain in demand. To better understand the job potential in the analytics area, Analytics India Magazine took up this study to assess the jobs created by the Indian analytics practice. Job Posting Trend The average monthly growth rate in job postings for analytics in India stands at 11% for the last year. This is pretty exciting given that job growth in other areas is cited to be slowing down. After growing continuously in the second half of 2011, job postings in analytics decreased slightly in the first quarter of 2012. New job openings picked up with full rigor in May and June 2012. The months of January, May and June 2012 accounted for the most new job postings, each accounting for 13% of all jobs posted throughout the year. The growth rate in job postings was highest in May 2012, at 55% month on month. Key Highlights of the Study: The months of January, May and June 2012 accounted for the most new job postings, together accounting for 38% of all jobs posted throughout the year. The average number of years of experience demanded by employers stands at 6 years. 38% of all job postings were for the designation of Manager/Sr. Manager. Financial Analytics is the largest role demanded by employers (at 19%), followed by Risk Analytics (14%). Banking is still the largest industry for analytics professionals; new job postings from this industry stand at an incredible 50%. Bangalore and Gurgaon together account for 55% of all job openings. Designation ‘Manager’ is the most demanded designation in this survey, indicating a severe dearth of middle-level talent in the analytics space.
38% of all job openings in analytics were for the post of ‘Manager’, followed by 26% for ‘Analyst’ and 11% for ‘Consultant’. Top-layer talent, including ‘Director’ and ‘VP’, accounted for 12% of all job openings. Bangalore was the largest location across all designation levels in terms of the number of job postings. Similarly, Banking was the largest industry in terms of the number of analytics openings across all designation levels, except for Director, where the CPG industry had the most job openings, in the role of Market Research. For the Manager post, Risk Analytics was the largest role offered; for the rest of the designations, Financial Analytics was the biggest role. Role ‘Financial Analytics’ is the most demanded role in India (19%), followed by ‘Risk Analytics’ (14%) and ‘Marketing Analytics’ (6%). NCR takes the top slot for Risk Analytics, Consulting, Healthcare Analytics, Supply Chain Analytics, Telecom and HR Analytics. Mumbai posts the most jobs for Insurance Analytics. For the remaining roles, Bangalore is the largest location by number of job postings. Modelling roles require a relatively lower experience level; the average work experience requested was 5 years, with Analyst the largest designation. The highest experience level was for Healthcare Analytics, for the post of Manager. Industry The Banking industry commands an overwhelming 50% of all job postings in analytics, despite a slated slowdown in the industry. Banking was the first adopter of analytics capabilities and has fairly matured in its use of analytics. Thus, analytics is currently not just a strategic ‘good-to-have’ competency in this industry but a requisite skill for efficient execution. Most of the jobs for Telecom came from Mumbai, and in the Healthcare industry the largest designation posted was Director, with an average of 11 years of work experience. Location 35% of all job openings in analytics came from Bangalore.
Individually, Gurgaon posted 21% of jobs and Delhi contributed 16%; NCR as a whole contributed 39% of all job postings. Hyderabad and Pune posted mostly Market Research roles, while Chennai posted mostly Modelling roles and jobs in the Insurance industry. For infographics on this study, check this link. Photography by Meghna Bharadwaj [attachments title=”Download the Complete Report”]
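The designation shares reported in the study can be tallied in a few lines. The numbers below are taken directly from the text above; the "others" bucket is simply the remainder:

```python
# Shares of all analytics job openings by designation, as reported in the study.
designation_share = {
    "Manager/Sr. Manager": 38,
    "Analyst": 26,
    "Consultant": 11,
    "Director/VP": 12,
}

# Whatever is left over falls outside the four designations listed.
others = 100 - sum(designation_share.values())

# The most demanded designation by share.
top = max(designation_share, key=designation_share.get)

print(top, f"{others}% others")  # prints: Manager/Sr. Manager 13% others
```

The tally confirms the study's headline: ‘Manager’ dominates, and the four named designations together cover 87% of all postings.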
The global market for business analytics software has grown impressively in the last two years, fueled by pervasive hype about “big data” as well as new technological innovations. In India, despite persisting economic uncertainties and slowing growth in IT sector, analyst predicts that analytics will remain in demand. To better understand the job potential in […]
["AI Features"]
[]
Дарья
2012-08-02T20:56:34
2012
690
["big data", "Go", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "RAG", "analytics", "R"]
["AI", "analytics", "RAG", "R", "Go", "big data", "innovation", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/analytics-india-jobs-study-2012/
3
9
3
false
false
true
10,040,956
Leadership Lessons To Learn From Ants To Thrive In The Post-Pandemic Era
The pandemic came without a playbook, and the crisis has pushed the traditional practice of solving problems by collecting requirements, making assumptions, brainstorming ideas, and providing solutions to the curb. Post-COVID, it’s imperative to plan experiments to test your hypotheses and move to a more scientific problem-solving approach. But, how? Let us learn from ants – the most diligent creatures, which optimize their time, resources, and skills to the fullest. How Ants Find Food? Experiment-based Intelligence of Nature There are two ways ants can obtain food. When an ant explores a path, it leaves pheromones (high-odour chemicals) for the next ant to follow its trail. It’s pure discovery-driven experimentation. A group of experimental ants will come out. Unable to find any pheromones, half of them will take the shorter route (the straight path), and the other half will take the longer route. These experimental ants will collect food and return to the nest. The group following the shorter path will complete the round trip faster and leave more pheromones behind. The follower ants will now travel back and forth along the stronger pheromone trace. Experimental ants on the longer course will soon change their path, and the pheromone deposits will begin to evaporate along the longer route. Soon every ant will travel along the straighter course. Unlike primates, ants have no concept of long and short but can still find routes. This is a glimpse into the intelligence of nature. Lesson Learned: Move from a Hypothesis-driven to a Discovery-driven Method of Finding Solutions The new method of thinking today should be less driven by hypothesis and more by discovery. We must figure out a way for discovery-driven problem solving, not bound by constraints, because the solution might lie in an area a hypothesis might never have reached.
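The pheromone mechanism described above is, in effect, the ant colony optimisation heuristic. Here is a minimal sketch of the two-route experiment; the colony size, evaporation rate and route lengths are illustrative assumptions, not figures from the article:

```python
import random

def simulate_colony(n_ants=100, n_rounds=50, evaporation=0.1, seed=0):
    """Toy pheromone model: two routes to the same food source,
    the long one twice the length of the short one."""
    rng = random.Random(seed)
    lengths = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}  # no trail yet: a 50/50 split

    for _ in range(n_rounds):
        deposits = {"short": 0.0, "long": 0.0}
        for _ in range(n_ants):
            # Each ant follows a route in proportion to its trail strength.
            p_short = pheromone["short"] / (pheromone["short"] + pheromone["long"])
            route = "short" if rng.random() < p_short else "long"
            # A shorter round trip means more pheromone laid per unit time.
            deposits[route] += 1.0 / lengths[route]
        # Old trails evaporate while fresh deposits reinforce.
        for route in pheromone:
            pheromone[route] = (1 - evaporation) * pheromone[route] + deposits[route]

    # Share of total trail strength on the short route.
    return pheromone["short"] / (pheromone["short"] + pheromone["long"])
```

With more rounds, the short route's share of the trail approaches 1, mirroring how every ant eventually follows the straighter course even though no individual ant compares the two lengths.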
And when we try to ‘discover the solution’, we must accept that we do not know the right course, eventually breaking the linear narrative. We often have a stick-in-the-mud tendency to zero in on outcomes based on our hypothesis, and we often lose the opportunity to find optimal solutions. It’s imperative that validation goes side by side with experiments. Interestingly, while finding paths, the ants kept experimenting and eventually discovered the ‘food’. This validated that the shorter route is the best way to collect the food. Thus, the discovery-driven problem-solving approach requires exploring options in a rapidly changing environment. How Do Ants Identify Risks? Work Culture Evolution without Central Control Ants have different roles. The ‘Queen’ ant only lays eggs. ‘Foraging ants’ find food. ‘Patrolling ants’ ensure that there are no predators around. ‘Nest keepers’ remove debris from the nest and stack it outdoors. ‘Midden workers’ go out, collect the pile and build a big bank away from the ant colony. American myrmecologist Deborah M. Gordon conducted experiments to ascertain ants’ behaviour under different external conditions. Deborah’s First Experiment: How Do Ants Respond to Uncertainty? In this experiment, Deborah planted predatory lizards near the colony. The following day, when the patrolling ants discovered the lizard as a threat outside, the entire colony kept hiding inside. Since ants can’t speak or see well, what can explain their intelligent behaviour? After the patrollers left, the foragers waited at the colony’s entrance for their return. Although ants’ memory span is only 10 to 15 seconds, they can sense a threat when too many or too few patrolling ants enter the colony; if the optimal number of ants returns, all is well. There is no central control – the Queen isn’t the boss. Ants interact locally to build an overall intelligent colony. In today’s cut-throat competition, organizations most often operate in survival mode.
But nature teaches more. Evolution has never been linear; it oscillates forward. However, we’ve primarily resorted to a work environment where a central authority has complete control over decisions and growth. Imagine a site executive reporting a change he observed to his senior. Transcending the hierarchy, it finally reaches the top boss. Again, the response cascades down the same hierarchy. But by then, the situation might have changed, and the final response would be inept. The command-and-control problem-solving approach solves the problems of yesterday. How Do Ants Switch Roles? Distributed Problem-solving Based on Utilizing Buffer, Local Experience & Multiple Interactions Here is another view of the ants’ colony: 25% of ants work outside, and about 25% inside. Almost 50% are reserves, but they can also switch roles. Isn’t this a counterintuitive arrangement of nature? Deborah’s Second Experiment: How Do Ants Respond to Changing Food Availability? To Deborah’s surprise, when the food supply increased, there were suddenly more foragers. Assuming those were reserves, she tagged individual ants, only to find that the additional ants were not all foragers but mostly nest maintenance workers, patrollers, and midden workers. The ‘reserve’ ants came out in different groups and exhibited immediate task fidelity. If there had been zero buffer, ants could neither have switched tasks nor collected and stored excess food. Deborah’s experiment highlights a few critical tenets of nature’s intelligence. Why do ants keep a 50% buffer? Isn’t such a large buffer inherently wasteful? As a single entity, an ant is clueless about what to do. When external conditions favoured foraging, the colony’s dynamics changed: the ants adapted by working in parts to influence the whole group’s behaviour. Thus, nature chooses adaptability over efficiency. In the absence of a buffer, one can’t conduct the experiments required to discover the optimum problem-solving approach.
When you’re in a passive environment, perfection and efficiency are the right strategies. However, in rapidly changing conditions, you need to choose a ‘good enough’ approach over ‘perfection’ and ‘adaptable’ over ‘efficient’. Key Takeaways for Business Leaders Business leaders should learn five important lessons from ants. Rapid experimentation leads to discoveries that drive problem-solving. Centralized control is replaced by a distributed problem-solving approach based on local experience. Multiple interactions among agents drive the free flow of information. A buffer provides flexibility to respond to changing conditions. The whole is greater than the sum of its parts. Why is it Important to have a Distributed Problem-solving Approach in the Post-COVID Era? When our ancestors hunted and gathered in Africa, do you think they had a central command? The few languages that existed in those times helped them communicate. Thus, humans hunted in large numbers without centralized control. But when did we lose this? With the advent of agriculture, farmers could produce large quantities of food and store it for a long time. Eventually, property rights emerged, which led to countless disputes, and an organizational structure based on control became the norm. Then came the Industrial Revolution. It took the division of labour to the other extreme, and the factory paradigm emerged: every individual had a well-defined role based on the production line. In the absence of frequent interactions, there was no scope for experimentation; hence, there was little discovery-driven problem-solving. Then Karl Marx put forward the idea of communism, inspired by the command-and-control economy. This was followed by Ayn Rand’s phase, in which the tenet of unfettered objectivism defined man as a heroic, omnipotent being, set against the portrayal of men as hordes. The protagonists of her books changed the way civilizations existed.
But nature doesn’t encourage absolute organizational authority or the individual virtuoso. Let Go of the Top-Down Management Approach As a mechanical engineer, I learned that efficiency is the correct approach for identical outcomes. However, in a constantly changing environment, we must learn from nature and rethink our organizational structures. Often, we’re unwilling to create a buffer, which contradicts what nature teaches us. Innovations should not be momentous achievements of an entire civilization but a way of life. Innovation processes are a combination of many inventors’ joint efforts: multiple interactions, learning based on local experience, lots of experiments, and ample buffers. Post-COVID, traditional business models will no longer be relevant. Should we then restrict ourselves to merely observing the change? Absolutely not. Today, as leaders, we must learn from ants to embed the discovery-driven problem-solving approach and let go of the control approach. Instead, we must create buffers and experiment more to make the world better.
Post-COVID, it’s imperative to plan experiments to test your hypotheses and move to a more scientific problem-solving approach.
["AI Features"]
["leadership", "Tredence", "tredence analytics", "tredence hiring", "tredence learning and development programme"]
Shashank Dubey
2021-05-27T16:58:16
2021
1,342
["tredence analytics", "Go", "API", "programming_languages:R", "AI", "IPO", "innovation", "tredence learning and development programme", "RAG", "tredence hiring", "Ray", "GAN", "Tredence", "R", "leadership"]
["AI", "Ray", "RAG", "R", "Go", "API", "GAN", "innovation", "IPO", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/leadership-lessons-to-learn-from-ants-to-thrive-in-the-post-pandemic-era/
2
10
2
false
false
false
28,954
The Hortonworks-Cloudera Merger Is Actually Hadoop’s Obituary
In a move that signals the death of Hadoop, and that the open-source software is no longer a key part of big data vendors’ strategy, rival companies Cloudera and Hortonworks jointly announced a merger this week. They also announced a definitive agreement under which the companies will combine in an all-stock merger of equals. An official statement also revealed their roadmap to make Hadoop native to the cloud and usher in the development of a next-gen data platform leader. This will notably be the industry’s first enterprise data cloud, bringing the ease of use and elasticity of the public cloud. What’s most surprising about the all-stock merger is that both companies, which sell enterprise-ready Hadoop distributions and operate in a similar space, read the writing on the wall in the nick of time: the rise of managed data science services from AWS, Azure and Google made Hadoop less useful and was one of the key reasons behind the merger of the two big data pioneers. From Heavy Data Infrastructure To Cloud Both big data giants, which provided software around Hadoop, flourished at a time when most projects were heavy data infrastructure-based. This was a decade ago, when analysts performed data analysis on extremely large sets of data. Over the last few years, the advent of cloud object storage changed the big data market exponentially, and users moved away from the Hadoop stalwarts. According to Wikibon Lead Analyst James Kobielus, HDFS-based data lakes typify data-at-rest architectures, which are no longer a part of enterprise data strategies. So this is what happened — the world moved to cloud adoption, with data analysis services designed for the cloud era. Moreover, with the rise of public cloud object storage such as Google Cloud Storage, Amazon S3, IBM Cloud Object Storage, and AWS Elastic MapReduce File System, dependency on HDFS has reduced drastically.
The rise of object storage-as-a-service, which provides a robust, scalable unstructured data store, likewise brought down the curtain on Hadoop. Over the years, object storage has become the core platform for big data solutions, providing several advantages such as access to large amounts of data and a programmable storage interface. Analysts believe object storage will eventually be replaced by stream computing, which will become the foundation of tomorrow’s data architectures. Hadoop Symbolised Big Data — Cloud And AI Brought Its End While the Hadoop ecosystem symbolised big data in the early days (a decade back), the last few years have seen a massive shift in data architecture, with organisations heavily investing in serverless computing to tackle shifting workloads. To support new database architectures, there has been a rise in other open source projects like Kafka, Elastic and Flink, among others. Kubernetes, the open-source container-orchestration system developed by Google, is also soaring in popularity and is used to manage Google-scale workloads. However, it was the major rise in cloud computing, spanning storage, managed services and open source activity, that upended the Hadoop market. Since both companies operated in the same market (Cloudera also targeted data warehouses, while Hortonworks provided solutions in edge computing and IoT), they can now work together to create “a superior unified platform and clear industry standard from the Edge to AI, substantially benefiting customers, partners and the community”. Tom Reilly, Cloudera CEO, conceded in a statement that the two businesses were complementary and strategic. “By bringing together Hortonworks’ investments in end-to-end data management with Cloudera’s investments in data warehousing and machine learning, we will deliver the industry’s first enterprise data cloud from the Edge to AI.
This vision will enable our companies to advance our shared commitment to customer success in their pursuit of digital transformation,” he said. What Does This Signify For MapR? Hadoop’s obituary was already being written in 2017, when enterprises shied away from Hadoop distribution vendors, primarily the big data pioneers Cloudera and Hortonworks, built around the open-source Apache project that rose in 2005. However, questions are also swirling around MapR, another company that is an offshoot of the Hadoop era. Pegged as one of the most innovative products, MapR is well known for its open source business model and for pushing the boundaries on databases, containers and file systems. According to a recent statement from the company, MapR was recognised for providing a data platform for AI and analytics that enables enterprises to inject analytics into their business processes, thereby increasing revenue, reducing costs and mitigating risks, and that helps address the data complexities of high-scale, mission-critical distributed processing, from cloud to edge, IoT analytics, and container persistence.
In a move that signals the death of Hadoop and that the open-sourced software is no longer a key part of big data vendor’s strategy, two rival companies Cloudera and Hortonworks jointly announced a merger this week. They also announced a definitive agreement under which the companies will combine in an all-stock merger of equals. […]
["AI Features"]
["cloud storage", "cloudera", "hadoop on azure cloud", "hadoop world", "is hadoop a company", "Kubernetes"]
Richa Bhatia
2018-10-04T13:03:47
2018
758
["data science", "cloudera", "hadoop world", "hadoop on azure cloud", "machine learning", "is hadoop a company", "AI", "cloud computing", "Kubernetes", "AWS", "Azure", "serverless", "RAG", "analytics", "cloud storage", "kubernetes"]
["AI", "machine learning", "data science", "analytics", "RAG", "cloud computing", "AWS", "Azure", "kubernetes", "serverless"]
https://analyticsindiamag.com/ai-features/the-hortonworks-cloudera-merger-is-actually-hadoops-obituary/
3
10
4
false
false
false
10,019,960
AI In Healthcare Is Challenging: What Can India Do?
“AI combined with robotics and the Internet of Medical Things (IoMT) could potentially be the new nervous system for healthcare.” NITI Aayog India’s think tank NITI Aayog, in its 2018 report, put healthcare on top priority in the list of domains that need an AI push. India is the perfect petri dish for enterprises and institutions globally to develop scalable solutions which can be easily implemented in the rest of the developing and emerging economies. Simply put, to solve for India is to solve for 40% or more of the world. The report also anticipated that AI application in healthcare can help overcome high barriers within the healthcare domain, particularly in rural areas that suffer from poor connectivity and a limited supply of healthcare personnel. This is where AI-driven diagnostics, personalised treatment, early identification of potential pandemics, and imaging diagnostics come in handy. However, AI-driven diagnostics is challenging. Firstly, data availability for different age groups, genders, and regions is not adequate to build a good machine learning model, and the biases show up in the results: underrepresented communities translate to fewer records, which in turn translates to inaccurate diagnoses for those groups. Data is not inert, writes Rachel Thomas of fast.ai. “A software bug is not just an error in a line of code; it is a woman with cerebral palsy losing the home health aide she relies on in daily life,” warns Rachel. She also highlighted the systematic misrepresentation in medical datasets. For instance, diagnosis delays lead to incomplete and incorrect data at any one snapshot in time. Highlighting the woes of misrepresentation, Rachel said it takes five years and five doctors for patients with autoimmune diseases such as multiple sclerosis to get a diagnosis — and three-quarters of such patients are women. The diagnosis of Crohn’s disease, meanwhile, takes twelve months for men and twenty months for women. This leads to incomplete and missing data.
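The effect of underrepresentation described above can be seen in a toy model. In this hypothetical sketch (the group names, the biomarker and its cut-offs are all invented for illustration), a single diagnostic threshold is fitted on data dominated by one group and then scored per group:

```python
def true_label(group, value):
    # Disease onset at a biomarker value above 5 for group A,
    # but above 3 for the under-represented group B.
    return value > (5 if group == "A" else 3)

# Training pool: 90 rows from group A, only 10 from group B.
rows = [("A", v) for _ in range(9) for v in range(10)] + \
       [("B", v) for v in range(10)]
rows = [(g, v, true_label(g, v)) for g, v in rows]

def accuracy(threshold, subset):
    # Fraction of rows where "value above threshold" matches the true label.
    return sum((v > threshold) == y for _, v, y in subset) / len(subset)

# Fit one global threshold by maximising pooled accuracy.
best_t = max(range(10), key=lambda t: accuracy(t, rows))

group_a = [r for r in rows if r[0] == "A"]
group_b = [r for r in rows if r[0] == "B"]
print(best_t, accuracy(best_t, group_a), accuracy(best_t, group_b))
# prints: 5 1.0 0.8
```

The pooled accuracy looks excellent (98%), yet the threshold settles on the majority group's cut-off and systematically misdiagnoses the under-represented group, which is exactly the failure mode the passage warns about.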
Additionally, poorly understood diseases make matters worse for any organisation trying to leverage AI in a medical setup. To tackle the most pressing challenges, Rachel recommends focusing on five principles: Acknowledge that medical data can be incomplete, incorrect, missing, and biased. Recognise how ML systems can result in centralising power at the expense of patients and healthcare providers alike. Machine learning designers must emphasise how new systems will interface with medical systems. Recognise that patients have their own expertise, distinct from doctors’. Shift the focus from bias and fairness to power and participation. Rachel also insists on taking a broad view of domain expertise. Though there is no doubt that doctors with specialisations are critical for validating the models, she would like to see patients play a more strategic role in this effort. Feedback from patients will also help in building models whose results reflect reality. “Data are not bricks to be stacked, oil to be drilled, gold to be mined, opportunities to be harvested. Data are humans to be seen, maybe loved, hopefully taken care of,” wrote Rachel, quoting AI researcher Inioluwa Deborah Raji. Lessons From The FDA Machine learning models are already getting good at helping radiologists. But how legit are these algorithms? Have they been thoroughly vetted by government bodies? For example, when Google AI, in partnership with the Ministry of Public Health in Thailand, conducted deep learning experiments in a handful of clinics, it found fundamental issues in the way the deep learning systems were deployed. Though the model improved regularly, the challenges came from factors external to the model. Software as a medical device, or SaMD, comes with lots of challenges.
To address such issues, last month the United States watchdog FDA published an action plan: FDA’s SaMD specifications describe “what” aspects the manufacturer intends to change through learning, and their Algorithm Change Protocol (ACP) explains “how” the algorithm will learn and change while remaining safe and effective. FDA will make sure that Good Machine Learning Practices such as data management, feature extraction, training, interpretability, evaluation and documentation are observed. FDA will chart out a patient-centred approach, including the need for manufacturers’ transparency to users about the functioning of AI/ML-based devices, to ensure users understand the benefits, risks, and limitations of these devices. To tackle bias, FDA is collaborating with institutions such as the Centers for Excellence in Regulatory Science, Stanford University, and Johns Hopkins University. FDA will support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis. When it comes to India, AI incorporation has to work around challenges such as a shortage of qualified healthcare professionals and services and non-uniform access to healthcare across the country. India’s eHealth ambitions are yet to gain significant traction despite promising starts with initiatives such as the National eHealth Authority (NeHA), the Integrated Health Information Program (IHIP), and the Electronic Health Record Standards for India. According to NITI Aayog, AI adoption for healthcare applications in India is expected to see an exponential increase in the next few years. The global healthcare market driven by AI is expected to register an explosive CAGR of 40% through 2021 and reach $6.6 billion this year. The think tank believes that advances in technology, and interest and activity from innovators, will allow India to solve some of its long-existing challenges in providing appropriate healthcare to a large section of its population.
AI combined with robotics and the Internet of Medical Things (IoMT) could potentially be the new nervous system for healthcare, presenting solutions to address healthcare problems and help the government meet mission critical objectives.
“AI combined with robotics and the Internet of Medical Things (IoMT) could potentially be the new nervous system for healthcare.” NITI Aayog India’s think tank NITI Aayog, in its 2018 report, put healthcare on top priority in the list of domains that need an AI push. India is the perfect petri dish for enterprises and […]
["AI Trends"]
[]
Ram Sagar
2021-02-10T18:00:00
2021
920
["Go", "machine learning", "AI", "ML", "Scala", "Git", "RAG", "deep learning", "GAN", "R"]
["AI", "machine learning", "ML", "deep learning", "RAG", "R", "Go", "Scala", "Git", "GAN"]
https://analyticsindiamag.com/ai-trends/ai-in-healthcare-is-challenging-what-can-india-do/
3
10
2
true
true
true
12,154
How Big Data advances are fuelling space exploration
Big Data has made big, impactful strides, and now it has joined the “race for space”. While big data analytics had already been put to work in learning about dark matter via data discovery techniques, statisticians and astrophysicists are applying advanced techniques to unlock the mysteries of the universe. A case in point is the use of an “automatic explorative analysis” data portfolio that assists in a) highlighting points of interest, b) performing analysis, c) creating visualizations and, finally, d) generating insightful reports. Powered by SAS® Visual Analytics, the “automatic explorative analysis” is how astrophysicists and astronomers are tackling the “big bang” questions. The visual analytics tool, primarily for big data discovery, reporting and interactive exploration, works in memory and was put to the test by two researchers, Lars Daldorff and Siavoush Mohammadi, who turned to standardized analytical solutions to explore the large amount of solar research project data from the plasma simulations they had conducted for NASA. Exploring the sun with big data analytics NASA can make use of Big Data to study solar magnetic loops. Usually, supercomputers simulate the sun and produce large amounts of data; however, the point of interest is situated at a specific point in time and space, making it a task to generate the necessary insights. Analytical tools combine computational power and statistics to deliver the most relevant information, significantly reducing that time. Moreover, this technology has the potential to help NASA with its research on solar magnetic loops, helping the organization produce insights faster. How NASA uses Big Data to crunch numbers NASA has made extensive use of Big Data-driven analytical engines for its Curiosity Rover project. The underlying technology was an open source program called Elasticsearch, one that also powers companies like Netflix and Goldman Sachs.
Elasticsearch helps scientists at NASA process all the data obtained from the Rover during its four scheduled uploads. These datasets revolve around several data points, including sensor readings of the temperature on the surface of Mars, atmospheric composition, and accurate data on the Rover’s equipment, tools and actions. In the future, NASA plans to build the world’s largest radio telescope. The project, called the Square Kilometre Array (SKA) and due to begin in 2018, has been estimated to produce about 700 terabytes of data per second. Use Cases: How data from space is powering life on earth As the computing power of satellites advances – crunching 2 billion instructions per second from the edge of space – we are sitting on a goldmine of data that could help in preventing natural disasters and using natural resources wisely. 1) Reportedly, the world’s smallest high-resolution imaging satellite was developed by Terra Bella, a satellite-operating firm that monitors terrestrial surfaces to track change. Google acquired Terra Bella, formerly Skybox Imaging, in 2014 and changed its name in 2016. Besides furnishing raw images, Terra Bella will cash in on Google’s geospatial data sources and machine learning capabilities to provide more services. 2) The Climate Corporation is a digital agriculture company that analyses the weather to help farmers across the world adapt to climate change. The organization was acquired in 2014 by Monsanto, a firm that specializes in agrochemicals and agricultural biotechnology. 3) In 2010, a specialized team of ex-NASA scientists founded Planet, a San Francisco-headquartered satellite imaging company that operates the largest fleet of earth-imaging satellites, called doves, which photograph the earth’s surface. The main objective that underlies Planet’s value proposition is its mission to image the entire Earth every day and provide insights regarding any changes.
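To put the quoted SKA figure of roughly 700 terabytes per second in perspective, a quick back-of-the-envelope calculation (using decimal units, where 1 exabyte = 1,000,000 terabytes):

```python
TB_PER_SECOND = 700                  # the rate quoted for the SKA
SECONDS_PER_DAY = 24 * 60 * 60       # 86,400 seconds in a day

tb_per_day = TB_PER_SECOND * SECONDS_PER_DAY
eb_per_day = tb_per_day / 1_000_000  # 1 EB = 1,000,000 TB (decimal)

print(f"{eb_per_day:.2f} EB/day")    # prints: 60.48 EB/day
```

That is roughly 60 exabytes a day at the quoted rate, which makes clear why storing and processing such a stream, rather than collecting it, becomes the real challenge.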
Planet’s Imaging-as-a-Service platform uses data from space to better life on earth. From measuring agricultural yields and monitoring natural resources to averting natural disasters, Planet’s data has been put to work across various sectors – defense and intelligence, energy and infrastructure, forestry, mapping, and agriculture, among others. An MOU has been signed between ISRO and the Telangana Government 4) On the Indian front, the Telangana state government recently signed a pact with the Indian Space Research Organisation (ISRO) to study its water resource information systems by implementing a satellite visualization platform. This move will help the government enable the farming community to assess the annual variability of water in surface runoff using satellite-obtained data. Consequently, the state government can devise pre-planned solutions and create a water resources map. 5) Another Indian company, Dhruva Space, is credited with developing and deploying satellites for non-telecom commercial purposes such as vehicle and flight tracking, disaster management, predictive analytics, and imaging. The firm makes extensive use of its satellites to collate earth data from outer space and relay it back to earth for businesses and industries to churn value out of. Dhruva Space is making headway in the landscape concerning the use of Big Data in space. Future of Big Data in furthering space research CubeSat Data management is just one piece of the puzzle; visualization and interpretation are major elements that help in understanding space research data. The surge in the development and deployment of CubeSats (miniaturized satellites) and the onboarding of faster communication technology have made space exploration slightly less challenging. According to reports, CubeSats can put several sensors into space, giving access to several data points. Processing all that streaming data, however, remains a challenge.
The real value of geospatial big data lies in powering the world’s economy. This can be achieved by combining traditional geospatial techniques with spatial behaviour to create models.
Big Data has made big, impactful strides and now it has joined the “race for space”. While big data analytics had already been put to work in learning about dark matter, via data discovery techniques, statisticians and astrophysicists are applying advanced techniques to unlock the mysteries of universe. Case in point is the use of […]
["IT Services"]
["geospatial data India"]
Amit Paul Chowdhury
2017-01-10T11:21:08
2017
908
["big data", "Elasticsearch", "Go", "machine learning", "AI", "R", "geospatial data India", "Git", "Ray", "analytics", "predictive analytics"]
["AI", "machine learning", "analytics", "Ray", "predictive analytics", "Elasticsearch", "R", "Go", "Git", "big data"]
https://analyticsindiamag.com/it-services/big-data-advances-fuelling-space-exploration/
3
10
2
false
true
true