Veranda Learning Solutions Limited (“Veranda”), a publicly listed EdTech company (BSE: 543514, NSE: VERANDA), announced that its Board of Directors, at a meeting held on September 14, 2022, approved a preferential issue to raise Rs. 300 crores, subject to the approval of shareholders at the ensuing EGM on October 6, 2022. The raise includes an investment of Rs. 61.40 crores to be subscribed by the promoters in the form of convertible warrants.
The fundraise is through a mix of a preferential offer of equity shares and convertible warrants, both priced at Rs. 307 per share. Each warrant is convertible into one equity share, and the conversion can be exercised at any time within 18 months from the date of allotment. 25% of the total consideration for the convertible warrants is payable at the time of application.
It may be recalled that the Company secured shareholder approval at the Extraordinary General Meeting held on May 27, 2022 to raise debt in the form of NCDs/bonds and other instruments of up to Rs. 1,000 crores. Together, this debt and equity fundraise would be used to fuel inorganic growth through acquisitions.
Speaking about the fund raise, Mr. Kalpathi S. Suresh, Chairman and Executive Director, Veranda Learning Solutions, said, “We are pleased with the response to the private placement and the success of the fund raise places Veranda in a unique position with the necessary war chest to fuel the next leg of growth. At Veranda, our objective is to provide the highest quality education possible at an affordable price. To that end, we are building an eco-system to strengthen our offerings through a judicious mix of high-quality content propelled by cutting edge technology which we believe will take Veranda to greater heights.”
The Workforce Institute at UKG Survey: More Than Half of Workers in India Wouldn’t Want Their Children to Have Their Job
The report, titled ‘We Can Fix Work’, presents the findings of a 10-country survey of employees, C-level leaders, and HR professionals conducted by The Workforce Institute at UKG.
- The report was launched on 9th December, 2022 at the UKG LIVE event held at Sahara Star, Mumbai.
- The survey found that 52% of people would tell their children to pursue jobs in which they find ‘meaning’ rather than being driven entirely by pay.
- While money will remain a driving factor in job choices, the coming generations won’t regard it as the only one.
Standing at the threshold of the future of work, The Workforce Institute at UKG, which provides research and education on critical workplace issues facing organizations around the world, surveyed employees and leaders across 10 countries to get a pulse of how they really feel about their jobs. According to the results, India ranked the highest with 66% of employees stating that they wouldn’t recommend their profession to their children or any young person that they care about, while 67% wouldn’t recommend their employers.
The full report, “We Can Fix Work,” provides insight into what parents, family members, and mentors are telling children about what they should value in their jobs and employers — urging future generations to let purpose, not money, guide career choices.
It found that on a global scale, nearly half (46%) of employees would not recommend their company or their profession to their children or a young person they care about, and a startling 38% “wouldn’t wish my job on my worst enemy.”
“Employees and leaders alike, as has been found in this report, prioritise finding meaning in their work more than making money. We have to realise that with these shifting times, we are navigating towards a generation of workers who don’t necessarily rely on their job for survival: instead their work is more personal to them in terms of adding value to their lives, and fuelling their existing passions,” said Neil J Solomon, vice president, Asia Pacific and Latin America at UKG. “For a workforce such as this, we need to develop a workplace culture that nourishes and nurtures the overall development of its employees, takes care of their physical as well as mental wellbeing, appreciates their efforts, and maintains a mutual sense of respect with individuals at different levels of the organisation irrespective of hierarchies. This, right here, is the beginning of the future of work and employee centricity is at the heart of it.”
Workforce burnout: 45% of employees worldwide don’t want to work anymore, period
There has been a recent rise in the anti-work mindset globally, owing to the pandemic: 77% of employees around the world want to spend less time working and more time doing things that matter to them. Among C-suite leaders, it is the younger leaders who are readiest to bow out of work completely, with 58% of Gen Z leaders saying they don’t want to work anymore, compared with 36% of Millennial leaders and 33% of Gen X leaders. A disinclination towards work is thus being observed across the ranks of employees and leaders alike.
Too much overtime affects the employee-employer relationship
If employees work overtime more than twice per week, it strains their relationship with their employer, and they become even less likely to recommend their jobs or their companies to the next generation: more than half (58%) of employees globally who work overtime 3–4 times per week wouldn’t recommend their profession to kids, and 60% wouldn’t recommend their organisation. The report shows clearly that more money does not equate to job satisfaction: most people have a transactional relationship with work, only 23% of employees genuinely enjoy their work and are passionate about it, and 64% would switch jobs right now if they could.
With purpose and trust, 88% of employees look forward to work
Now more than ever, companies must prioritise the wellbeing of their employees, not just for better outcomes in the present, but for their long-term sustainability in the future. Employees in India topped the global charts with a staggering 89% saying that they are committed in their pursuit of greater purpose at work — most of any country surveyed.
What does great look like?
Great Place To Work research finds people at the best workplaces around the world are living in a vastly different — and more fulfilling — reality than the typical employee, starting with the sense of purpose they find in their work. For those at the best workplaces:
- 90% feel like they can be themselves
- 88% look forward to going to work
- 85% believe their work has special meaning
- 85% enjoy psychologically healthy work environments
What’s more, rather than warn loved ones away, 89% of people at these best workplaces would “strongly endorse” their organizations to friends and family.
The full report, “We Can Fix Work,” examines feedback from 2,200 employees surveyed in partnership with Workplace Intelligence across Australia, Canada, France, Germany, India, Mexico, New Zealand, the Netherlands, the U.K., and the U.S., as well as 600 C-suite leaders and 600 HR executives in the U.S.
Microsoft and LinkedIn engage 7.3 million learners in India through Skills for Jobs program, to help 10 million people learn digital skills
Will provide free access to 350 courses, six new Career Essentials Certificates, and 50,000 LinkedIn Learning Scholarships
Microsoft and LinkedIn announced the next step in the Skills for Jobs program, providing free access to 350 courses and six new Career Essentials Certificates for six of the most in-demand jobs in the digital economy. Microsoft and LinkedIn will also be offering 50,000 LinkedIn Learning scholarships to help people get ahead in their skilling journey. By 2025, Microsoft will help train and certify 10 million people with skills for in-demand jobs. Today’s launch builds on the Global Skills Initiative, which helped 80 million jobseekers around the world access digital skilling resources.
To date, Microsoft has engaged 14 million learners in Asia via LinkedIn, Microsoft Learn and non-profit skilling efforts. Of this, 7.3 million learners were from India. The top six LinkedIn Learning Pathways in India were: Critical Soft Skills, Software Developer, Data Analyst, Financial Analyst, Project Manager, and Customer Service Specialist.
Using data from LinkedIn and the Burning Glass Institute, Microsoft analyzed job listings to determine the six roles in greatest demand for the program: Administrative Professional, Project Manager, Business Analyst, Systems Administrator, Software Developer, and Data Analyst. The new courses and certificates will be offered in seven languages: English, French, German, Spanish, Portuguese, Simplified Chinese, and Japanese. This expansion builds on Microsoft’s commitment to supporting inclusive economic opportunity so learners around the world have equitable access to the skills, technology, and opportunity needed to succeed in a digitizing economy.
Microsoft’s new commitment to offer skilling support for the most sought-after digital jobs is aimed at enabling people and organizations to seize job opportunities, gain a competitive edge and emerge as trailblazers – as they contribute to a vibrant tech ecosystem and accelerate innovation needed for growth.
Dr. Rohini Srivathsa, National Technology Officer, Microsoft India said, “Bridging the skills gaps in today’s digital economy is foundational to India’s employment challenges and building towards inclusive economic and societal progress in India. Microsoft has been invested in various initiatives to skill India’s youth, tapping into the potential of underserved communities and the opportunity to bring more women into the workforce. With our new commitment to help equip another 10 million globally with highly relevant skilling support, we want to continue making tech skills accessible to all, opening up employment opportunities for people to succeed and embrace innovation. We are privileged to be collaborating with LinkedIn and our partners in our local communities, to empower every person in India to be part of a growing digital ecosystem and to achieve more, together.”
The new Career Essentials Certificates are designed to help learners bridge the gap from basic digital literacy to more advanced technical skills training and gain certifications that will be valuable in securing employment. Once a learning pathway is completed, learners will receive a LinkedIn badge to denote their certificate and indicate fluency in the skillset to employers.
All courses are available on LinkedIn at opportunity.linkedin.com. In addition, Microsoft-developed courses are also available on Microsoft Community Training (MCT) and in downloadable format for use on other Learning Management Systems (LMS) for nonprofit partners.
Empowering social media users to assess content helps fight misinformation
An experimental platform that puts moderation in the hands of its users shows that people do evaluate posts effectively and share their assessments with others
Written by Adam Zewe, MIT News Office
When fighting the spread of misinformation, social media platforms typically place most users in the passenger seat. Platforms often use machine-learning algorithms or human fact-checkers to flag false or misinforming content for users.
“Just because this is the status quo doesn’t mean it is the correct way or the only way to do it,” says Farnaz Jahanbakhsh, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
She and her collaborators conducted a study in which they put that power into the hands of social media users instead.
They first surveyed people to learn how they avoid or filter misinformation on social media. Using their findings, the researchers developed a prototype platform that enables users to assess the accuracy of content, indicate which users they trust to assess accuracy, and filter posts that appear in their feed based on those assessments.
Through a field study, they found that users were able to effectively assess misinforming posts without receiving any prior training. Moreover, users valued the ability to assess posts and view assessments in a structured way. The researchers also saw that participants used content filters differently — for instance, some blocked all misinforming content while others used filters to seek out such articles.
This work shows that a decentralized approach to moderation can lead to higher content reliability on social media, says Jahanbakhsh. This approach is also more efficient and scalable than centralized moderation schemes, and may appeal to users who mistrust platforms, she adds.
“A lot of research into misinformation assumes that users can’t decide what is true and what is not, and so we have to help them. We didn’t see that at all. We saw that people actually do treat content with scrutiny and they also try to help each other. But these efforts are not currently supported by the platforms,” she says.
Jahanbakhsh wrote the paper with Amy Zhang, assistant professor at the University of Washington Allen School of Computer Science and Engineering; and senior author David Karger, professor of computer science in CSAIL. The research will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing.
The spread of online misinformation is a widespread problem. However, current methods social media platforms use to mark or remove misinforming content have downsides. For instance, when platforms use algorithms or fact-checkers to assess posts, that can create tension among users who interpret those efforts as infringing on freedom of speech, among other issues.
“Sometimes users want misinformation to appear in their feed because they want to know what their friends or family are exposed to, so they know when and how to talk to them about it,” Jahanbakhsh adds.
Users often try to assess and flag misinformation on their own, and they attempt to assist each other by asking friends and experts to help them make sense of what they are reading. But these efforts can backfire because they aren’t supported by platforms. A user can leave a comment on a misleading post or react with an angry emoji, but most platforms consider those actions signs of engagement. On Facebook, for instance, that might mean the misinforming content would be shown to more people, including the user’s friends and followers — the exact opposite of what this user wanted.
To overcome these problems and pitfalls, the researchers sought to create a platform that gives users the ability to provide and view structured accuracy assessments on posts, indicate others they trust to assess posts, and use filters to control the content displayed in their feed. Ultimately, the researchers’ goal is to make it easier for users to help each other assess misinformation on social media, which reduces the workload for everyone.
The researchers began by surveying 192 people, recruited using Facebook and a mailing list, to see whether users would value these features. The survey revealed that users are hyper-aware of misinformation and try to track and report it, but fear their assessments could be misinterpreted. They are skeptical of platforms’ efforts to assess content for them. And, while they would like filters that block unreliable content, they would not trust filters operated by a platform.
Using these insights, the researchers built a Facebook-like prototype platform, called Trustnet. In Trustnet, users post and share actual, full news articles and can follow one another to see content others post. But before a user can post any content in Trustnet, they must rate that content as accurate or inaccurate, or inquire about its veracity, which will be visible to others.
“The reason people share misinformation is usually not because they don’t know what is true and what is false. Rather, at the time of sharing, their attention is misdirected to other things. If you ask them to assess the content before sharing it, it helps them to be more discerning,” she says.
Users can also select trusted individuals whose content assessments they will see. They do this in a private way, in case they follow someone they are connected to socially (perhaps a friend or family member) but whom they would not trust to assess content. The platform also offers filters that let users configure their feed based on how posts have been assessed and by whom.
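The mechanics described above — structured accuracy assessments attached to posts, a private set of trusted assessors, and filters driven by those assessments — can be sketched in code. This is a hypothetical illustration of the idea, not the study's actual implementation; all names and filtering rules here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    # Structured accuracy assessments: assessor name -> "accurate" / "inaccurate".
    assessments: dict = field(default_factory=dict)

def filter_feed(posts, trusted, mode="hide_inaccurate"):
    """Filter a feed using only assessments from trusted users.

    mode="hide_inaccurate": drop posts a trusted user marked inaccurate.
    mode="only_inaccurate": keep only such posts (some study participants
    deliberately sought out misinforming content).
    """
    def trusted_verdicts(post):
        # Assessments from users outside the trusted set are ignored.
        return [v for u, v in post.assessments.items() if u in trusted]

    if mode == "hide_inaccurate":
        return [p for p in posts if "inaccurate" not in trusted_verdicts(p)]
    if mode == "only_inaccurate":
        return [p for p in posts if "inaccurate" in trusted_verdicts(p)]
    return list(posts)

feed = [
    Post("alice", "Claim A", {"carol": "accurate"}),
    Post("bob", "Claim B", {"carol": "inaccurate", "mallory": "accurate"}),
]
trusted = {"carol"}  # mallory's assessments carry no weight for this user
print([p.text for p in filter_feed(feed, trusted)])                     # ['Claim A']
print([p.text for p in filter_feed(feed, trusted, "only_inaccurate")])  # ['Claim B']
```

Keeping the trusted set per-user and private matches the paper's design point: who you follow socially and whose judgment you trust are deliberately decoupled.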
Once the prototype was complete, they conducted a study in which 14 individuals used the platform for one week. The researchers found that users could effectively assess content, often based on expertise, the content’s source, or by evaluating the logic of an article, despite receiving no training. They were also able to use filters to manage their feeds, though they utilized the filters differently.
“Even in such a small sample, it was interesting to see that not everybody wanted to read their news the same way. Sometimes people wanted to have misinforming posts in their feeds because they saw benefits to it. This points to the fact that this agency is now missing from social media platforms, and it should be given back to users,” she says.
Users did sometimes struggle to assess content when it contained multiple claims, some true and some false, or if a headline and article were disjointed. This shows the need to give users more assessment options — perhaps by stating that an article is true-but-misleading or that it contains a political slant, she says.
Since Trustnet users sometimes struggled to assess articles in which the content did not match the headline, Jahanbakhsh launched another research project to create a browser extension that lets users modify news headlines to be more aligned with the article’s content.
While these results show that users can play a more active role in the fight against misinformation, Jahanbakhsh warns that giving users this power is not a panacea. For one, this approach could create situations where users only see information from like-minded sources. However, filters and structured assessments could be reconfigured to help mitigate that issue, she says.
In addition to exploring Trustnet enhancements, Jahanbakhsh wants to study methods that could encourage people to read content assessments from those with differing viewpoints, perhaps through gamification. And because social media platforms may be reluctant to make changes, she is also developing techniques that enable users to post and view content assessments through normal web browsing, instead of on a platform.
This work was supported, in part, by the National Science Foundation.