Implementing Biometric Login in React Native: A Comprehensive Guide for iOS
Biometric authentication has become an essential feature for mobile applications, providing users with a convenient and secure way to access their accounts. With biometrics, users can authenticate using Face ID, Touch ID, or fallback to device passcodes. This guide explains how to implement biometric login in a React Native application by bridging native iOS code with your React Native app.
Why Biometric Authentication?
In today’s digital landscape, security and user experience are paramount. Biometric authentication offers:
Quick and seamless login experience.
Enhanced security compared to traditional password methods.
Support for multiple authentication types (Face ID, Touch ID, device credentials).
What You’ll Learn
How to check biometric authentication availability on iOS devices.
How to implement biometric authentication with a fallback to device credentials.
How to bridge native iOS code with React Native.
How to use the functionality in your React Native app.
Step 1: Permissions Required for Biometric Authentication
iOS requires a usage description in your app's configuration before it can access biometric features. Update your app's Info.plist file to include the following key:
<key>NSFaceIDUsageDescription</key>
<string>We use Face ID to authenticate you securely.</string>
This key ensures that your app can request the user's permission to use Face ID. (Touch ID works without a separate usage description, so no additional key is needed.)
Step 2: Checking Biometric Availability
We need to verify whether biometric authentication is supported on the device. This is done using the LAContext class from Apple's LocalAuthentication framework.
Native Code (iOS)
Create a method in your native module to check biometric authentication availability:
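Here is a minimal sketch of such a module; the module name (NativeBridge), method names, and bridging details are illustrative rather than taken from the original post:

import LocalAuthentication

// Exposed to JavaScript via RCT_EXTERN_MODULE in an accompanying Objective-C file (not shown).
@objc(NativeBridge)
class NativeBridge: NSObject {

  // Resolves with true when Face ID / Touch ID or the device passcode can be used.
  @objc
  func isBiometricAvailable(_ resolve: @escaping RCTPromiseResolveBlock,
                            rejecter reject: @escaping RCTPromiseRejectBlock) {
    let context = LAContext()
    var error: NSError?
    // .deviceOwnerAuthentication falls back to the device passcode when biometrics are unavailable.
    resolve(context.canEvaluatePolicy(.deviceOwnerAuthentication, error: &error))
  }

  // Prompts the user and resolves "AUTH_SUCCESS" on success.
  @objc
  func authenticateWithBiometric(_ resolve: @escaping RCTPromiseResolveBlock,
                                 rejecter reject: @escaping RCTPromiseRejectBlock) {
    let context = LAContext()
    context.evaluatePolicy(.deviceOwnerAuthentication,
                           localizedReason: "Log in to your account") { success, error in
      if success {
        resolve("AUTH_SUCCESS")
      } else {
        reject("AUTH_ERROR", error?.localizedDescription ?? "Authentication failed", error)
      }
    }
  }
}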
On the JavaScript side, the app calls the bridged module. A minimal component sketch, assuming the NativeBridge module above:

import React, { useState } from 'react';
import { View, Text, Button, NativeModules } from 'react-native';

const { NativeBridge } = NativeModules;

const App = () => {
  const [authStatus, setAuthStatus] = useState('');
  const handleLogin = () =>
    NativeBridge.authenticateWithBiometric()
      .then(setAuthStatus)
      .catch((e) => setAuthStatus(e.message));
  return (
    <View>
      <Button title="Login with Biometrics" onPress={handleLogin} />
      <Text>{authStatus}</Text>
    </View>
  );
};

export default App;
Next Steps
Implement credential storage using Keychain for enhanced security.
Add comprehensive error handling and user feedback.
Expand the feature to support additional scenarios (e.g., two-factor authentication).
Conclusion
With this guide, you’ve added biometric authentication to your React Native app on iOS, offering users a secure and seamless login experience. Follow similar steps for Android to ensure feature parity across platforms.
Implementing Biometric Login in React Native: A Comprehensive Guide
Biometric login offers a seamless and secure user authentication experience, allowing users to access their accounts with fingerprint, face recognition, or device credentials like PIN or pattern. In this blog post, we'll walk through implementing biometric login in a React Native application using native Android code, bridged to React Native via Android's BiometricPrompt API.
Why Biometric Authentication?
In today’s digital landscape, security and user experience are paramount. Biometric authentication offers:
Quick and seamless login experience
Enhanced security compared to traditional password methods
Support for multiple authentication types (fingerprint, face recognition, device credentials)
What You’ll Learn
How to check biometric authentication availability on the device.
How to implement biometric authentication with fallback to device credentials.
How to bridge native code with React Native.
How to use the functionality in your React Native app.
Step 1: Permissions Required for Biometric Authentication
To implement biometric authentication in your React Native app, you need to declare specific permissions in the Android AndroidManifest.xml file. These permissions ensure your app can access and use the device’s biometric features, such as fingerprint or face recognition.
Add the following permissions to your AndroidManifest.xml file:
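Typically this means the biometric permission, plus the legacy fingerprint permission for older API levels (a minimal sketch):

<uses-permission android:name="android.permission.USE_BIOMETRIC" />
<!-- Fallback for devices on Android 9 (API 28) and below -->
<uses-permission android:name="android.permission.USE_FINGERPRINT" />

Step 2: Checking Biometric Authentication Availability
Before showing a prompt, check what the device supports. A minimal sketch using androidx.biometric's BiometricManager (the method name is illustrative):

@ReactMethod
public void isBiometricAvailable(Promise promise) {
    BiometricManager biometricManager = BiometricManager.from(getReactApplicationContext());
    // BIOMETRIC_WEAK covers fingerprint/face; DEVICE_CREDENTIAL covers PIN/pattern/password.
    int result = biometricManager.canAuthenticate(
            BiometricManager.Authenticators.BIOMETRIC_WEAK
                    | BiometricManager.Authenticators.DEVICE_CREDENTIAL);
    promise.resolve(result == BiometricManager.BIOMETRIC_SUCCESS);
}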
This method checks if biometric or device credential authentication is supported and returns a boolean value.
Step 3: Implementing Biometric Authentication
Next, we create a method to authenticate users using biometrics. If biometrics aren't available, we fall back to device credentials (PIN, pattern, etc.).
Native Code (Android)
@ReactMethod
public void authenticateWithBiometric(Promise promise) {
    FragmentActivity activity = (FragmentActivity) getCurrentActivity();
    if (activity == null) {
        promise.reject("NO_ACTIVITY", "No activity found");
        return;
    }

    // Check which authenticator classes are available on this device.
    BiometricManager biometricManager = BiometricManager.from(activity);
    int canAuthenticateWithBiometric = biometricManager.canAuthenticate(
            BiometricManager.Authenticators.BIOMETRIC_WEAK);
    int canAuthenticateWithDeviceCredential = biometricManager.canAuthenticate(
            BiometricManager.Authenticators.DEVICE_CREDENTIAL);

    if (canAuthenticateWithBiometric != BiometricManager.BIOMETRIC_SUCCESS &&
            canAuthenticateWithDeviceCredential != BiometricManager.BIOMETRIC_SUCCESS) {
        promise.reject("AUTH_NOT_AVAILABLE", "No authentication methods available");
        return;
    }

    // executor and biometricPrompt are fields of the module class.
    executor = ContextCompat.getMainExecutor(activity);
    final int[] attemptCounter = {0};

    biometricPrompt = new BiometricPrompt(activity, executor, new BiometricPrompt.AuthenticationCallback() {
        @Override
        public void onAuthenticationError(int errorCode, @NonNull CharSequence errString) {
            // Skip the cancellation we trigger ourselves below, so the promise is not rejected twice.
            if (errorCode != BiometricPrompt.ERROR_CANCELED) {
                promise.reject("AUTH_ERROR", errString.toString());
            }
        }

        @Override
        public void onAuthenticationSucceeded(@NonNull BiometricPrompt.AuthenticationResult result) {
            promise.resolve("AUTH_SUCCESS");
        }

        @Override
        public void onAuthenticationFailed() {
            // Called once per unrecognized biometric attempt; give up after three.
            attemptCounter[0]++;
            if (attemptCounter[0] >= 3) {
                promise.reject("AUTH_FAILED", "Authentication failed after 3 attempts");
                biometricPrompt.cancelAuthentication();
            }
        }
    });

    // Prefer biometrics with a device-credential fallback; otherwise use device credentials only.
    int allowedAuthenticators = (canAuthenticateWithBiometric == BiometricManager.BIOMETRIC_SUCCESS)
            ? BiometricManager.Authenticators.BIOMETRIC_WEAK | BiometricManager.Authenticators.DEVICE_CREDENTIAL
            : BiometricManager.Authenticators.DEVICE_CREDENTIAL;

    try {
        BiometricPrompt.PromptInfo promptInfo = new BiometricPrompt.PromptInfo.Builder()
                .setTitle("Unlock to login")
                .setSubtitle("Just one glance or touch, and you're in!")
                .setAllowedAuthenticators(allowedAuthenticators)
                .build();
        // authenticate() must be called on the main thread.
        activity.runOnUiThread(() -> biometricPrompt.authenticate(promptInfo));
    } catch (Exception e) {
        promise.reject("AUTH_ERROR", "Error building prompt: " + e.getMessage());
    }
}
This method:
Displays the biometric prompt to the user.
Authenticates the user with biometrics or device credentials.
Handles success, error, and failed attempts.
Step 4: Bridging Native Code with React Native
We need to expose the native methods to React Native using a custom native module.
Native Code: NativeBridge
public class NativeBridgePackage implements ReactPackage {
    @Override
    public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
        return Collections.emptyList();
    }

    @Override
    public List<NativeModule> createNativeModules(ReactApplicationContext reactContext) {
        List<NativeModule> modules = new ArrayList<>();
        modules.add(new NativeBridge(reactContext));
        return modules;
    }
}
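Finally, register the package in MainApplication so React Native can discover the module; a minimal sketch for the classic (non-TurboModule) setup:

@Override
protected List<ReactPackage> getPackages() {
    List<ReactPackage> packages = new PackageList(this).getPackages();
    packages.add(new NativeBridgePackage()); // register our custom bridge
    return packages;
}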
With the implementation above, you’ve added biometric authentication to your React Native app, providing users with a secure and user-friendly login experience. This guide can serve as a template for enhancing the security features of your app.
Let us know your thoughts or share your challenges in the comments below! 🚀
Choosing a Search Engine: An Overview Across Programming Languages
Search engines play a critical role in web and software applications by providing the ability to efficiently retrieve and display data. Depending on the complexity and size of your data, as well as the language or framework you're using, there are several search engine solutions to choose from. Below is a comprehensive overview of search engines and their use across various coding languages, focusing on TNTSearch, Elasticsearch, and a few others across different programming environments.
1. TNTSearch
TNTSearch is a fast, in-memory search engine typically used in PHP applications and works seamlessly with Laravel via Laravel Scout. It’s lightweight and ideal for small to medium-sized datasets.
Use Cases
PHP / Laravel: TNTSearch integrates directly into Laravel applications, especially through Laravel Scout. It’s great for applications where the dataset is moderate, and search speed is important without needing a separate service.
Pros:
Easy to integrate, particularly with Laravel.
Great for real-time, in-memory searches.
Automatic indexing with minimal setup.
Cons:
Struggles with larger datasets.
Basic search capabilities; not suitable for complex queries.
Languages:
PHP: Mainly used with Laravel applications.
JavaScript: Can be used in combination with search libraries or as part of backend services that handle the logic.
Example in PHP with Laravel Scout:
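A minimal sketch using Laravel Scout with the TNTSearch driver (the model and query below are illustrative):

use Illuminate\Database\Eloquent\Model;
use Laravel\Scout\Searchable;

class Post extends Model
{
    use Searchable; // Scout keeps the TNTSearch index in sync on create/update/delete
}

// Somewhere in a controller:
$results = Post::search('laravel tips')->get();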
2. Elasticsearch
Elasticsearch is one of the most popular full-text search engines and is designed to handle distributed search workloads. It’s highly scalable and can process large amounts of data. Elasticsearch is used across a variety of languages and frameworks due to its advanced search capabilities, flexibility, and ability to handle real-time indexing.
Use Cases:
a. Large-scale applications requiring complex full-text search capabilities.
b. Applications that need to perform advanced filtering, ranking, or faceted search (e.g., eCommerce or enterprise-level apps).
Pros:
Highly scalable for large datasets.
Supports complex, real-time queries and advanced features.
Open-source with a large community and support ecosystem.
Cons:
Requires significant setup and maintenance (e.g., server management).
More resource-intensive than lightweight solutions like TNTSearch.
Languages:
a). JavaScript (Node.js): Commonly used for backend search services.
b). Python: Elasticsearch is used in data analytics and scientific research tools.
c). Ruby: Used for search in Ruby on Rails applications.
d). Java: Elasticsearch itself is written in Java, so it has deep integration with the Java ecosystem.
Example in JavaScript (Node.js):
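A minimal sketch assuming the official @elastic/elasticsearch client (v8) and an illustrative "articles" index:

const { Client } = require('@elastic/elasticsearch');

const client = new Client({ node: 'http://localhost:9200' });

async function run() {
  // Index a document (index name and fields are illustrative).
  await client.index({
    index: 'articles',
    document: { title: 'Search engines overview', body: 'Full-text search with Elasticsearch' },
  });
  await client.indices.refresh({ index: 'articles' });

  // Run a full-text match query.
  const result = await client.search({
    index: 'articles',
    query: { match: { body: 'search' } },
  });
  console.log(result.hits.hits);
}

run().catch(console.error);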
3. Solr
Solr is another robust search engine built on top of Apache Lucene, and it’s comparable to Elasticsearch in terms of scalability and full-text search capabilities. It has a solid footing in enterprise-level applications and is often used in large-scale deployments that require extensive indexing and querying capabilities.
Use Cases:
a. Enterprise search applications.
b. Websites requiring advanced filtering and faceted search (e.g., eCommerce, document search engines).
Pros:
Extremely scalable and reliable.
Has faceted search capabilities and is highly configurable.
Open-source, with support for both distributed and non-distributed search.
Cons:
Complex to set up and manage, similar to Elasticsearch.
Requires dedicated resources for optimal performance.
Languages:
Java: Solr is built in Java and integrates easily with Java-based applications.
Python: Popular in data-centric applications.
PHP / Symfony: Integrates well with PHP frameworks, though setup is more complex than with Elasticsearch.
Example in Java:
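A minimal sketch using the SolrJ client (the core name and fields are illustrative):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class SolrExample {
    public static void main(String[] args) throws Exception {
        // Connect to an "articles" core on a local Solr instance.
        try (Http2SolrClient client =
                 new Http2SolrClient.Builder("http://localhost:8983/solr/articles").build()) {
            // Index a document.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            doc.addField("title", "Search engines overview");
            client.add(doc);
            client.commit();

            // Query it back.
            QueryResponse response = client.query(new SolrQuery("title:search"));
            response.getResults().forEach(d -> System.out.println(d.getFieldValue("title")));
        }
    }
}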
4. Sphinx
Sphinx is an open-source full-text search engine designed for indexing large volumes of text and offering fast searching capabilities. It’s widely used for web-based applications and can index databases or text files. Sphinx is known for being highly efficient, lightweight, and offering scalability for large datasets.
Use Cases:
a. Websites with a high volume of content, such as news portals or forums.
b. Applications that need fast and efficient search indexing for text-heavy data.
Pros:
High-performance, full-text search engine with low resource requirements.
Supports distributed searching and indexing.
Easy to integrate with SQL databases like MySQL and PostgreSQL.
Cons:
Limited advanced search features compared to Elasticsearch and Solr.
No built-in support for non-text data or analytics.
Languages:
PHP: Sphinx integrates well with PHP-based applications through its MySQL protocol.
Python: Used in web applications for quick search indexing.
Ruby: Offers support for Ruby on Rails through third-party libraries.
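Example in PHP:
Sphinx speaks the MySQL wire protocol (SphinxQL), so a PHP application can query it with PDO. A minimal sketch (the index name and default SphinxQL port 9306 are illustrative):

$pdo = new PDO('mysql:host=127.0.0.1;port=9306'); // SphinxQL listener
$stmt = $pdo->prepare('SELECT id, title FROM articles_index WHERE MATCH(:query)');
$stmt->execute([':query' => 'search engines']);
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    echo $row['title'], PHP_EOL;
}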
5. Whoosh
Whoosh is a fast, lightweight search engine library written in Python. It is designed for smaller applications where search needs are minimal or moderate. Whoosh provides full-text indexing and search capabilities without the need for an external server, making it suitable for local applications or development environments.
Use Cases:
a. Desktop or lightweight web applications.
b. Projects where simplicity and ease of use are a priority.
c. Educational tools and smaller search applications.
Pros:
Written entirely in Python, making it easy to integrate into Python applications.
Lightweight and doesn’t require running a separate server.
Easy to set up and use for small-to-medium-sized projects.
Cons:
Not suitable for large-scale applications or distributed search.
Limited scalability and performance compared to other engines like Elasticsearch or Solr.
Languages:
Python: Exclusively used with Python applications, especially for small-scale search functionalities.
Example in Python:
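A minimal sketch with Whoosh (the index directory, schema, and query are illustrative):

import os
from whoosh.fields import ID, TEXT, Schema
from whoosh.index import create_in
from whoosh.qparser import QueryParser

# Create an on-disk index with a simple schema.
os.makedirs("indexdir", exist_ok=True)
schema = Schema(id=ID(stored=True), body=TEXT(stored=True))
ix = create_in("indexdir", schema)

# Add a document.
writer = ix.writer()
writer.add_document(id="1", body="Whoosh is a pure-Python search library")
writer.commit()

# Search the index.
with ix.searcher() as searcher:
    query = QueryParser("body", ix.schema).parse("python")
    for hit in searcher.search(query):
        print(hit["body"])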
6. Xapian
Xapian is an open-source search engine library that provides full-text search functionality. It’s known for its flexibility and simplicity and is often used for embedding search features within applications. Xapian supports a range of programming languages and can be integrated into various applications with ease.
Use Cases:
a. Embedding search functionality in existing applications or services.
b. Suitable for medium to large datasets that require fast searching.
Pros:
Supports advanced indexing and search features like probabilistic ranking.
Multi-language support and bindings for several programming languages.
Provides both Boolean and probabilistic search models.
Cons:
Steeper learning curve for advanced functionalities.
Not as feature-rich for enterprise-level applications as Elasticsearch or Solr.
Languages:
C++: Core library written in C++, offering fast performance.
Python: Commonly used in Python applications via the Xapian bindings.
PHP: Integrates well with PHP through native extensions.
Example in Python:
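A minimal sketch with the Xapian Python bindings (the database path and text are illustrative):

import xapian

# Index a document.
db = xapian.WritableDatabase("xapian_index", xapian.DB_CREATE_OR_OPEN)
termgen = xapian.TermGenerator()
termgen.set_stemmer(xapian.Stem("en"))
doc = xapian.Document()
termgen.set_document(doc)
termgen.index_text("Xapian is a search engine library")
doc.set_data("Xapian is a search engine library")
db.add_document(doc)
db.commit()

# Search for it.
enquire = xapian.Enquire(db)
qp = xapian.QueryParser()
qp.set_stemmer(xapian.Stem("en"))
enquire.set_query(qp.parse_query("search"))
for match in enquire.get_mset(0, 10):
    print(match.document.get_data().decode())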
7. MeiliSearch
MeiliSearch is a modern, powerful, and open-source search engine built with simplicity and performance in mind. It’s designed for applications where speed, relevance, and customization are critical. MeiliSearch is known for its low latency and real-time indexing capabilities, making it a great option for dynamic applications.
Use Cases:
a. Real-time search for web applications or mobile apps.
b. Projects that need lightning-fast search responses with custom ranking options.
Pros:
Extremely fast and responsive, with support for real-time indexing.
Provides customizable ranking algorithms.
Simple to set up and easy to integrate into various environments.
Cons:
Still evolving and not as mature as Elasticsearch or Solr.
Lacks some advanced analytics and distributed search features.
Languages:
JavaScript (Node.js): MeiliSearch provides an official JavaScript SDK for easy integration with web applications.
Ruby: Can be used with Ruby on Rails applications for fast search features.
PHP: Supported through community-maintained libraries for Laravel and other PHP frameworks.
Example in JavaScript (Node.js):
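A minimal sketch with the official meilisearch JavaScript SDK (the host and index name are illustrative):

const { MeiliSearch } = require('meilisearch');

const client = new MeiliSearch({ host: 'http://127.0.0.1:7700' });

async function run() {
  const index = client.index('articles');
  await index.addDocuments([{ id: 1, title: 'MeiliSearch overview' }]);
  // Indexing is asynchronous; in practice, wait for the enqueued task to finish before searching.
  const results = await index.search('overview');
  console.log(results.hits);
}

run().catch(console.error);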
8. Typesense
Typesense is an open-source search engine optimized for speed and ease of use. It’s designed to handle typo tolerance and fast queries, making it ideal for user-facing applications like eCommerce, documentation sites, or dashboards. Typesense is developer-friendly, offering instant search and autocomplete features out of the box.
Use Cases:
a. eCommerce websites with search and filtering options.
b. User-facing applications where search speed is critical.
Pros:
Provides typo tolerance and instant search out of the box.
Developer-friendly, with simple APIs for various programming languages.
Designed for real-time, fast performance.
Cons:
Limited to specific use cases, not as customizable as Solr or Elasticsearch.
Doesn’t handle extremely large datasets as efficiently as other search engines.
Languages:
JavaScript (Node.js): Official SDK for integrating Typesense into web applications.
Python: Python support for search-based applications and data analysis.
Ruby: Ruby SDK available for Rails applications with fast search requirements.
Example in JavaScript (Node.js):
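A minimal sketch with the official typesense JavaScript client (the node settings, API key, and collection are illustrative):

const Typesense = require('typesense');

const client = new Typesense.Client({
  nodes: [{ host: 'localhost', port: 8108, protocol: 'http' }],
  apiKey: 'xyz',
});

async function run() {
  // Define a collection schema.
  await client.collections().create({
    name: 'articles',
    fields: [{ name: 'title', type: 'string' }],
  });
  await client.collections('articles').documents().create({ title: 'Typesense overview' });

  // Typo tolerance: "overveiw" still matches "overview".
  const result = await client.collections('articles').documents().search({
    q: 'overveiw',
    query_by: 'title',
  });
  console.log(result.hits);
}

run().catch(console.error);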
Conclusion
Search engines come in various forms, each suited to specific needs depending on the size, complexity, and performance requirements of your application. Whether you’re building small to medium-scale applications with TNTSearch or looking for large-scale distributed solutions with Elasticsearch and Solr, there’s a search engine for every programming environment.
Choosing the right search engine largely depends on your application’s size, the type of data you need to index, and the complexity of your search requirements. Additionally, developer resources and ease of integration into existing environments are also key considerations when selecting the appropriate solution for your needs.
MongoDB: A Comprehensive Overview
MongoDB is a document-oriented, NoSQL database widely used for modern application development. It stores data in flexible, JSON-like documents, meaning fields can vary from document to document, and data structure can change over time. Its scalability, performance, and ease of use make it an ideal choice for handling large datasets and real-time data analytics.
MongoDB was designed to address the limitations of traditional relational databases. It is known for being schema-less, providing high availability, and allowing for horizontal scaling. Instead of storing data in rows and columns like traditional databases (SQL), MongoDB stores data as collections of documents. This makes it highly flexible and capable of handling a wide variety of data types.
What is MongoDB?
MongoDB is a document-oriented NoSQL database designed for scalability, flexibility, and performance. Developed by MongoDB Inc., it was first released in 2009 and has since become a cornerstone of many modern web applications and data-driven systems.
Key Features of MongoDB
1. Document-Oriented Storage
MongoDB uses a flexible schema to store data. It stores data in the form of BSON (Binary JSON), allowing for arrays, nested objects, and other complex data structures within a single document. Unlike traditional SQL databases, MongoDB doesn’t require predefined schemas, meaning that fields can be added, removed, or altered at any time without affecting the existing documents.
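For instance, two documents in the same collection can have different fields. A minimal mongo-shell sketch (collection and fields are illustrative):

db.users.insertMany([
  { name: "Asha", email: "asha@example.com" },
  { name: "Ravi", email: "ravi@example.com", address: { city: "Pune" }, tags: ["admin"] }
]);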
2. Scalability
MongoDB supports horizontal scaling through sharding. Sharding allows for distributing data across multiple servers, which improves both storage capacity and performance. MongoDB automatically manages the distribution of data across shards and balances load accordingly.
3. Indexing
To improve query performance, MongoDB supports various types of indexes, such as single field, compound, and geospatial indexes. These indexes help optimize searches within large datasets by quickly locating documents matching a query.
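A few illustrative index definitions in the mongo shell (collection and field names are made up for the example):

db.users.createIndex({ email: 1 }, { unique: true });      // single-field index
db.orders.createIndex({ customerId: 1, createdAt: -1 });   // compound index
db.places.createIndex({ location: "2dsphere" });           // geospatial index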
4. High Availability
MongoDB provides high availability through replication. Replica sets consist of two or more copies of data, ensuring data redundancy and failover support. If the primary node fails, the system automatically switches to a secondary node, minimizing downtime.
5. Aggregation Framework
MongoDB offers a powerful aggregation framework, allowing users to perform complex data transformations and analytics. It supports operations like filtering, grouping, sorting, and applying complex calculations, similar to SQL’s GROUP BY or JOIN operations.
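A minimal aggregation pipeline sketch in the mongo shell (collection and fields are illustrative):

db.orders.aggregate([
  { $match: { status: "shipped" } },                               // filter
  { $group: { _id: "$customerId", total: { $sum: "$amount" } } },  // group and sum
  { $sort: { total: -1 } }                                         // order by total
]);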
6. Load Balancing
MongoDB has built-in load balancing that distributes read and write operations across replica sets, ensuring high throughput and reducing latency. This makes it suitable for handling high-traffic applications.
MongoDB Architecture
MongoDB uses a client-server architecture. The core components include:
Documents: The primary unit of data in MongoDB, represented in BSON format.
Collections: A grouping of documents, analogous to tables in relational databases. Collections don’t enforce schemas, so each document can have different fields.
Databases: A logical container for collections, each with its own set of collections and documents.
Shards: In a sharded cluster, data is distributed across multiple shards to support horizontal scaling.
Replica Sets: A group of MongoDB instances that host the same data. Replica sets provide redundancy and high availability.
Mongos: A routing service for sharded clusters that directs queries to the correct shards.
Setting Up MongoDB
Download MongoDB from the official website.
Install MongoDB following the instructions for your operating system.
MongoDB vs Redis: A Comprehensive Comparison for Optimization, Speed, Scalability, and Performance
When it comes to choosing a database for modern applications, two of the most commonly compared technologies are MongoDB and Redis. Both are highly regarded NoSQL databases that serve different use cases based on factors such as optimization, speed, scalability, and performance. This article provides a detailed comparison between MongoDB and Redis, helping developers and businesses decide which database suits their specific needs.
What is Redis?
Redis (Remote Dictionary Server) is an in-memory data structure store, often used as a key-value database, cache, and message broker. It supports different types of data structures like strings, lists, sets, and hashes. Redis is renowned for its lightning-fast speed since it primarily operates in-memory and offers advanced features like persistence, replication, and Lua scripting.
Type of Database
MongoDB: A document-oriented NoSQL database that stores data in BSON (Binary JSON). It is designed for handling large volumes of unstructured or semi-structured data.
Redis: An in-memory key-value store and cache that also supports other data structures like lists, sets, and hashes.
Speed and Performance
MongoDB: Slower compared to Redis for read-heavy operations because MongoDB writes data to disk. However, MongoDB performs well with large datasets, especially when combined with indexes.
Redis: Extremely fast because it operates entirely in memory, providing sub-millisecond latency. This makes Redis ideal for real-time applications like caching and session management.
Optimization
MongoDB: Optimized for large-scale document storage and retrieval. It supports rich queries, complex aggregations, and offers flexibility for schema changes. Great for handling complex data models.
Redis: Optimized for low-latency, high-throughput operations. It can be used for caching frequently accessed data, reducing load on a primary database. Redis also supports persistence with optional configuration for performance tuning.
Scalability
MongoDB: Built for horizontal scaling via sharding, which distributes data across multiple servers. This allows MongoDB to handle large-scale applications with ease, supporting both high availability and distributed workloads.
Redis: Supports horizontal scaling through clustering, where data is split across multiple Redis nodes. However, scaling Redis can be more complex because it stores everything in memory, meaning memory management is critical.
Data Persistence and Durability
MongoDB: Persistence is a core feature, as MongoDB stores data on disk by default. It offers high durability with replication and journaling to ensure data integrity in case of crashes or failures.
Redis: Primarily an in-memory database but offers AOF (Append-Only File) and RDB (Redis Database Backup) options for data persistence. While these options make Redis more durable, it doesn't match MongoDB's out-of-the-box durability.
MongoDB vs MySQL: A Comparison
Data Model:
MongoDB: Document-oriented (NoSQL), stores data in BSON format.
MySQL: Relational (SQL-based), uses tables with rows and columns.
Schema:
MongoDB: Flexible and schema-less, allowing dynamic data structures.
MySQL: Fixed, predefined schema with strict data types and structure.
Scalability:
MongoDB: Supports horizontal scaling through sharding, distributing data across multiple servers.
MySQL: Primarily scales vertically (by increasing server resources), with limited support for horizontal scaling.
Joins:
MongoDB: Limited support for joins; typically uses embedded documents and references for relationships.
MySQL: Extensive support for complex joins and relationships between tables.
Use Case:
MongoDB: Ideal for real-time analytics, unstructured data, and flexible data models.
MySQL: Best suited for structured data with complex relationships, where consistency is critical.
Advantages of MongoDB
Flexible Schema: MongoDB’s schema-less nature allows developers to modify data structures without major downtime.
Scalability: Horizontal scaling through sharding enables MongoDB to handle massive datasets efficiently.
Powerful Aggregation Framework: Supports complex data operations and analytics.
High Availability: Replication ensures data redundancy and failover support.
Disadvantages of MongoDB
Performance with Complex Queries: While MongoDB excels in many areas, certain types of complex queries may not perform as well as traditional SQL databases.
Memory Usage: MongoDB can be memory-intensive, especially when handling large datasets without proper indexing.
Limited Transaction Support: Although MongoDB supports multi-document transactions, this feature is relatively new and may not be as mature as in relational databases.
Future Uses of MongoDB
With its ability to handle big data, real-time analytics, and IoT applications, MongoDB’s future is bright. It is widely used in sectors like e-commerce, social media, and healthcare, where fast data processing and scalability are critical. Its continuous development with features like enhanced transactions and better cloud integration ensures MongoDB will remain relevant for future application development.
Conclusion
MongoDB revolutionizes the way developers handle data, offering flexibility, scalability, and high availability for modern applications. While it has some limitations, especially in complex querying, its document-oriented approach, coupled with its horizontal scalability, makes MongoDB an excellent choice for handling dynamic and large-scale datasets. As technology evolves, MongoDB will continue to play a crucial role in shaping the future of data management.
iPhone 16: Key Features and Upgrades
1. Camera Upgrades:
a. Spatial video technology: iPhone 16 features vertically aligned cameras specifically designed for spatial video capture, enabling immersive 3D-like video content.
b. Improved Ultra-Wide camera enabling macro photography.
c. The new vertical layout allows better alignment and collaboration between the lenses for depth perception, creating richer media.
2. New Camera Control Button:
a. Advanced camera controls: The iPhone 16 introduces a dedicated Camera Control button to allow easier and quicker access to camera settings.
b. Streamlined capture: With a single tap, you can switch between different camera modes (photo, video, portrait, night mode, etc.), control exposure, or toggle settings like flash and Live Photos.
c. Improved user experience: Provides quick adjustments during video recording and photo taking without navigating multiple menus.
3. New Action Button:
a. The Action Button replaces the traditional mute switch on the side of the iPhone.
b. Customizable: You can program it to perform various actions, such as launching the camera, starting a voice memo, toggling silent mode, launching an app, or running a shortcut.
c. User-defined: It offers customization through settings, letting you choose the specific action it performs depending on how you press it (e.g., press and hold, double press).
d. Dynamic functionality: Can also be integrated with Focus modes to change actions based on your current focus settings.
4. A18 Chipset:
a. The new A18 Bionic chipset brings a significant boost in both CPU and GPU performance.
b. Faster processing: Improved neural engine capabilities for faster AI and machine learning tasks, such as image recognition and natural language processing.
c. Enhanced efficiency: Better power management leads to improved battery life, particularly for resource-intensive tasks like gaming, 3D rendering, and video editing.
d. Advanced graphics: The A18 introduces advanced graphics rendering, enabling smoother gameplay and better performance for AR/VR applications.
5. Performance and Battery Life:
a. Powered by the all-new A18 chip, delivering up to 30% faster CPU and up to 40% faster GPU performance compared to the A16 Bionic.
b. Larger batteries achieved through redesigning the phone’s interior.
c. Significant boost in battery life – up to 22 hours video playback for iPhone 16 and 27 hours for iPhone 16 Plus.
6. Connectivity and features:
a. Support for Wi-Fi 7 connectivity.
b. Bluetooth 5.3 support.
c. Apple Intelligence integration, allowing for personal intelligence features.
7. Apple Intelligence Features (Arrived in October):
a. On-device AI: Apple is rolling out new Apple Intelligence features powered by the A18 chipset in October.
b. Advanced machine learning: These features enhance personalization, app recommendations, and predictive text functionalities.
c. Smarter Siri: Siri will get more powerful with on-device processing, making it faster and more responsive.
d. Enhanced Photos and Camera: AI-driven enhancements in photo editing, real-time adjustments, and object recognition across apps.
Redux vs. useContext in React Native
Redux: Redux is a state management library for JavaScript applications, commonly used with React. It provides a centralized store that holds the entire application's state, allowing you to manage and access state consistently across the application.
useContext: useContext is a React hook that allows components to access and share data across the component tree without the need for props drilling. It works with the Context API, which enables you to create a context and a provider that wraps around parts of your component tree. Components within that subtree can then consume the context directly using useContext, giving them direct access to shared state or data.
Props Drilling:
Props drilling is a concept in React where data (props) is passed from a parent component to deeply nested child components. When child components several levels down the component tree need access to the data, you must pass the data through each intermediary component as props, even if those intermediary components don’t actually use the data.
Problems with Props Drilling:
Repetitive Code: Every intermediary component must accept and pass along the props, even if it doesn’t use them.
Maintenance Issues: If you need to add, change, or remove a prop, you must update all components in the path, making the code harder to maintain.
Scalability: As the app grows, props drilling can make it difficult to manage data, especially when data is needed in many parts of the app.
useContext and Redux as Solutions to Props Drilling
Both useContext and Redux help manage global state in React, enabling you to avoid props drilling by providing state to components directly, regardless of their nesting level.
1. useContext
The Context API in React allows you to create a context, which provides data directly to any component that needs it, without needing to pass it down through every level in between.
How it Helps:
With useContext, you can avoid props drilling by wrapping a part of your component tree with a Provider and accessing the data with useContext in any descendant component.
It’s ideal for smaller or medium-sized applications where a piece of data needs to be shared by multiple components, but the app doesn’t require complex state management.
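A minimal sketch of the pattern in React Native (the context and component names are illustrative):

import React, { createContext, useContext } from 'react';
import { Text } from 'react-native';

const ThemeContext = createContext('light');

const Toolbar = () => {
  // Reads the value directly, with no props drilling through intermediaries.
  const theme = useContext(ThemeContext);
  return <Text>Current theme: {theme}</Text>;
};

const App = () => (
  <ThemeContext.Provider value="dark">
    <Toolbar />
  </ThemeContext.Provider>
);

export default App;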
2. Redux
Redux is a state management library that holds the entire application state in a single store. Components can access and update this state directly, which eliminates the need for props drilling across the application.
How it Helps:
Redux provides a global store for state, so components can access and update state directly without passing props.
This makes Redux particularly suitable for larger applications with complex state management needs, as it supports middleware for handling asynchronous actions and has powerful debugging tools.
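A minimal store sketch using Redux Toolkit (the slice and state shape are illustrative):

import { configureStore, createSlice } from '@reduxjs/toolkit';

const counterSlice = createSlice({
  name: 'counter',
  initialState: { value: 0 },
  reducers: {
    // Redux Toolkit uses Immer, so this "mutation" safely produces a new state.
    increment: (state) => { state.value += 1; },
  },
});

export const { increment } = counterSlice.actions;
export const store = configureStore({
  reducer: { counter: counterSlice.reducer },
});

// In any component: useSelector((s) => s.counter.value) to read,
// and useDispatch()(increment()) to update, with no props drilling.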
Both useContext and Redux can help avoid props drilling and make the component tree cleaner and more maintainable. The choice depends on the complexity and scale of your application.
Differences between Redux and useContext
React Native Redux vs. useContext: Main Differences
State Management Style:
Redux: Centralized, single global store. All state is held in one place, and components can access and update it via actions and reducers.
useContext: Decentralized, uses React’s Context API. State is shared between components without requiring a global store, but state is typically scoped to a subtree of components.
Scalability:
Redux: Suitable for larger applications with complex state logic because it offers predictable state management patterns. More structure and tooling (like middlewares) for handling side effects.
useContext: Better for smaller apps or for managing simpler, localized state. It can become challenging to maintain and scale with complex applications due to lack of middleware or action-based state flow.
Boilerplate:
Redux: More boilerplate code (setting up store, reducers, actions). This is often necessary for the stricter pattern but can be cumbersome.
useContext: Less boilerplate; integrates seamlessly into React with hooks. It’s lighter but doesn’t have the strict structure that Redux imposes.
Side Effects Handling:
Redux: Provides support for handling side effects via middleware like redux-thunk or redux-saga.
useContext: No native way to handle side effects. You would need to use other hooks like useReducer or useEffect to manage side effects, which can become complicated as the app grows.
Debugging Tools:
Redux: Redux DevTools provide advanced debugging and state tracking capabilities, making it easier to trace state changes.
useContext: No built-in debugging tools like Redux. State changes are harder to track, especially in larger apps.
Performance:
Redux: Uses selectors to optimize performance by preventing unnecessary re-renders when only specific parts of the state are updated.
useContext: Any context update will cause all consuming components to re-render, which can lead to performance issues in larger apps.
How to Set Up an App for Android TV and Apple TV Using React Native
Introduction
Smart TVs have revolutionized home entertainment, offering access to streaming, gaming, and interactive apps. With billions of devices in use, the global smart TV market is rapidly expanding, fueling the growth of TV apps like Netflix and Disney+. These apps now cover a broad range of categories, including gaming, fitness, and shopping.
For developers, this surge presents a valuable opportunity. Platforms like Android TV and Apple TV offer robust tools for building apps tailored to large screens and remote navigation. React Native has become a popular choice, enabling cross-platform development with reusable code across both devices.
Importance of React Native for cross-platform TV app development.
React Native plays a critical role in cross-platform TV app development by enabling developers to build apps for both Android TV and Apple TV with a shared codebase. This reduces development time and effort while ensuring consistency across platforms. Its flexibility allows for seamless adaptation to TV-specific requirements, such as remote navigation and UI scaling for larger screens.
Additionally, React Native’s vast ecosystem of libraries and community support enables developers to integrate advanced features like video playback, remote control navigation, and focus management seamlessly. This makes it a powerful tool for delivering high-quality TV apps across platforms, ensuring a consistent user experience.
Prerequisites
Basic knowledge of React Native.
Android Studio for Android TV development.
Xcode for Apple TV (tvOS) development.
Node.js and npm installed on your machine.
React Native CLI or Expo.
Setting Up Your React Native Project
Install React Native using the CLI or Expo: npx react-native init MyTVApp
Adding Support for Android TV and Apple TV (tvOS)
To set up your React Native project for both Android TV and Apple TV, you’ll need to install the react-native-tvos package. In your package.json, update the React Native version to ensure compatibility with TV platforms.
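A minimal sketch of the dependency change in package.json (the version shown is illustrative):

{
  "dependencies": {
    "react-native": "npm:react-native-tvos@0.75.2-0"
  }
}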
Note: Releases of react-native-tvos are based on public releases of react-native; e.g., the 0.75.2-0 release of this package is derived from the 0.75.2 release of react-native. All releases of this package follow the 0.xx.x-y format, where the x digits come from a specific RN core release and y represents additional versioning from the react-native-tvos repo.
This ensures that your project uses the tvOS-compatible version of React Native, enabling support for both Android TV and Apple TV development.
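On the Android side, the main change is in AndroidManifest.xml: Android TV launchers only list apps that declare the leanback launcher category, and TV devices have no touchscreen. A minimal sketch (the activity name is illustrative):

<uses-feature android:name="android.software.leanback" android:required="false" />
<uses-feature android:name="android.hardware.touchscreen" android:required="false" />

<activity android:name=".MainActivity">
  <intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.LEANBACK_LAUNCHER" />
  </intent-filter>
</activity>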
Now that the Android TV setup is complete, let’s move on to the steps for setting up Apple TV.
To set up Apple TV (tvOS), open your Podfile and make the following modifications:
Set the platform for tvOS:
platform :tvos, '13.4'
Enable Fabric for tvOS:
:fabric_enabled => true
In the next step, open your Xcode project and update the target settings:
Change the Destination Target: a. Go to the Project Navigator in Xcode. b. Select your project, then navigate to the Targets section. c. Under the General tab, locate Supported Destinations and change the destination target to Apple TV by selecting tvOS.
Remove Other Targets (if applicable): a. In the same Targets section, you can remove any other unnecessary targets by right-clicking and selecting Delete (for platforms like iOS if not needed).
Now, follow these steps to create a new file for the launch screen in your Apple TV (tvOS) project:
Select LaunchScreen:
a. In Xcode’s Project Navigator, select the LaunchScreen.storyboard file.
Create a New File:
a. Right-click on LaunchScreen.storyboard.
b. Click on New File.
Choose the File Type:
a. Select User Interface under the tvOS section.
b. Choose Storyboard and click Next.
Name the File:
Name your new file (e.g., LaunchScreen.storyboard), and click Create.
Now, to adjust the build script settings for your tvOS target in Xcode:
Open Build Settings: a. In Xcode, select your project from the Project Navigator. b. Go to the Targets section and select your Apple TV (tvOS) target.
Search for Build Script: a. Navigate to the Build Settings tab. b. In the search bar at the top right, type “Build Script”.
Set Build Script to NO: a. Locate the ‘Run Build Script Phase in Parallel’ option and change it to NO.
To run your app on the tvOS Simulator, follow these steps:
Open the Scheme Menu: a. In Xcode, locate the scheme menu at the top of the workspace window. It’s usually next to the “Run” button and displays the current scheme and target device.
Select tvOS Simulator: a. Click on the scheme menu to open the drop-down list. b. Under the Destination section, choose tvOS Simulator. c. Select a specific tvOS Simulator device (e.g., Apple TV 4K or Apple TV HD) from the available options.
This will configure Xcode to build and run your app on the selected tvOS Simulator, allowing you to test your Apple TV app.
To run your project, follow these steps:
Open Terminal:
a. Navigate to your project location in the terminal.
Install Dependencies: a. Run yarn or npm install
Navigate to iOS Folder using command: cd ios
Install CocoaPods Dependencies: a. Run the following command to install the iOS dependencies b. pod install
Return to Project Root: a. Go back to the project root directory: cd ..
Start the Development Server: a. Use Yarn or npm to start the development server: yarn start or npm start. b. Launch your tvOS Simulator or Android TV emulator. This will start the React Native development server, allowing you to run and test your app on the tvOS Simulator or an Apple TV device.
Emerging plant-wearable sensors allow for timely communication with plants to understand their physiological status, including temperature, water status, volatile emissions, and plant growth. They play a crucial role in providing data-driven insights to optimize the growing conditions and prevent potential problems, ultimately resulting in higher yields and improved sustainability. Developing these wearables can revolutionize agriculture and horticulture. However, there are remaining challenges in monitoring the chlorophyll content in plants, which is also an important biomarker for plant health.
Chlorophylls, including chlorophyll a and chlorophyll b, are crucial pigments participating in photosynthesis. In the light-dependent reaction of photosynthesis, chlorophyll absorbs light energy and converts it into chemical energy in the form of adenosine triphosphate and nicotinamide adenine dinucleotide phosphate, which are used to assemble carbohydrate molecules in subsequent steps. The chlorophyll content is directly related to photosynthetic potential and primary production. Moreover, chlorophyll content is proportional to thylakoid nitrogen and is influenced by plant stress and senescence. Compared with current plant-wearable sensors focusing on leaf humidity, temperature, and volatile organic compounds, leaf chlorophyll content can provide more direct and insightful information on chloroplast development, photosynthetic capacity, leaf nitrogen content, or general plant health status.
Why Do We Need to Monitor Crops?
Food Security: Ensure stable and sufficient food production.
Stress Management: Detect and respond to both biotic (e.g., pests, diseases) and abiotic (e.g., drought, temperature) stresses.
Yield Improvement: Optimize growing conditions to increase crop yield.
What Are Wearable Crop Sensors?
Wearable crop sensors are devices attached directly to different parts of plants (like leaves, stems, or roots) to monitor various aspects of plant health and growth in real-time.
Key Advantages of Wearable Sensors:
Real-time Monitoring: Provide continuous data on crop health.
Precision: Offer high spatial and temporal resolution compared to remote sensing methods.
Versatility: Can monitor various types of information (nutrient levels, physiological state, environmental conditions).
Traditional rigid sensors have limitations:
Can damage plant tissues
May cause biological rejection
Not ideal for long-term use
Emerging flexible sensors offer solutions:
Better mechanical properties (can bend and stretch)
Improved biocompatibility
Suitable for long-term, continuous monitoring
Impact on Agriculture
Wearable crop sensors, especially flexible ones, are poised to revolutionize agriculture by:
Enabling precise, real-time crop health monitoring
Facilitating early detection of stresses
Supporting data-driven decision-making in farm management
As this technology advances, it promises to play a crucial role in smart agriculture, helping to optimize resource use and improve crop yields in the face of growing global food demands.
Wearable Sensor Technology
A flexible, wearable chlorophyll meter capable of long-term plant monitoring has been developed. It employs a monochromatic LED for incident radiation and a pair of symmetric photodiodes (PDs) to measure the intensity of the reflected light. The chlorophyll content is calculated from the relationship between leaf chlorophyll content and spectral reflectance. The meter is 1.5 mm (about 0.06 in) thick and weighs 0.2 g, making it 1,000 times lighter than a commercial chlorophyll meter. It can be patched tightly onto the upper epidermis of the leaf and enables long-term monitoring with little negative impact on leaves and plants. The block diagram summarizes the critical components of the meter and the read-out circuit. Based on it, a smartphone-controlled platform was developed so users can conduct measurements and collect data easily. The power consumption of the system is 0.035 W. With this plant-patchable chlorophyll meter, leaf chlorophyll content can be measured more accurately and precisely (r² > 0.9) than with the SPAD meter. Moreover, during long-term monitoring (over 2 weeks), chlorophyll losses due to abnormal physiological activities of plants can be detected earlier than with the SPAD meter or naked-eye observation of yellowing.
Figure: Overview of the plant-patchable chlorophyll meter based on reflective optics. A) Schematics of the working mechanism of the patchable chlorophyll meter; B) exploded view and photograph of the patchable chlorophyll meter; C) photograph of the wearable chlorophyll meter patched on the leaf; D) system block diagram of device operation; E) wireless and portable smartphone-based platform for rapid and convenient measurements and data collection (FFC, flexible flat cable); F) advantages of the patchable chlorophyll meter over naked-eye observation and a commercial SPAD meter for early detection of plant stresses.
A key molecule for photosynthesis and plant growth, chlorophyll is an important target to monitor: in simple terms, it collects light to convert water and carbon dioxide into sugar for energy and to build new plant structural components.
Compared with non-contact monitoring methods, wearable sensors offer higher temporal and spatial resolution; they are fixed to the crop by a mechanical clamping method and directly monitor the growth of the crop and its growth microenvironment.
How Do Wearable Plant Sensors Help?
Plant growth is accompanied by many intricate and delicate processes, including photosynthesis, transpiration, and respiration. Plants are also susceptible to several environmental factors, and growth is negatively impacted when environmentally harmful substances reach the plant. Visual examination and soil testing, the traditional methods of crop monitoring, cannot immediately identify small changes in plant health or the biotic stresses that crops experience in the early phases. Furthermore, these traditional methods of plant health assessment are time-consuming and tedious.
Wearable Sensors for the Measurement of Plant Phenotypes
Traditional plant phenotyping methods are constrained in spatial resolution and accuracy due to their noncontact measurement mode. However, the rapid development of wearable sensors, which feature high spatial resolution, multifunctionality, and minimal invasiveness, provides a suitable tool for measuring plant phenotypes. In this section, we review the progress of wearable sensors in measuring plant phenotypes such as elongation, leaf temperature, hydration, bioelectric potential, and stress response.
Elongation
Elongation is an accurate indicator of plant growth, which aids in understanding the plant growth rhythm and response to environmental conditions. The typical optical phenotyping method for measuring elongation is time-lapse imaging, which enables noninvasive and continuous monitoring. However, this method has limitations, as the optical path can easily be blocked by other growing branches or leaves. In contrast, wearable sensors distributed on the surface of plants allow for in-situ monitoring of tensile strain, which can be converted to plant elongation. Nevertheless, the contact measurement mode requires wearable sensors to have sufficient stretchability to adapt to the continuous growth of plant organs, so that they neither break nor restrict the growth of the plant.
To achieve high stretchability, materials and manufacturing techniques are critical. One reported device is a stretchable strain sensor built from flexible, stretchable, and biocompatible materials to monitor plant elongation: a thin Ti/Au metal film was deposited on a stretchable polydimethylsiloxane (PDMS) substrate as the strain-sensing material. To eliminate the influence of moisture on resistance, the sensor was encapsulated in another, hydrophobic PDMS layer. Notably, the researchers also implemented a buckling technique in which the PDMS layer was prestrained, improving the stretchability of the sensor to 35%. The sensor showed a linear detection range of 0% to 22% strain, corresponding to an elongation range of 0 to 3.75 mm (about 0.15 in). The sensor's gauge factor was 3.9, sufficient to monitor the micrometer-scale elongations of plant growth. The strain sensor was anchored on a barley stem to measure growth, and the response of the sensor to plant growth was plotted. Over a growth period of 2 h and 35 min, the total strain detected was 1.6%, corresponding to a leaf elongation of 284.7 μm.
Another approach to improving stretchability is embedding conductive materials into elastic polymer composites. One study produced a direct-written flexible sensor by mixing graphite powder and chitosan solution in a certain proportion; the resulting stretchable sensor could be brushed directly onto the desired position. To prevent interference from humidity, the sensor was sealed with rubber pieces. Experimental results showed that the sensor could reach a maximum strain of 60%. The sensors were written directly onto two cucumber fruits, in groups A and B, to monitor their elongation. The resistance of the sensor in group A continuously increased as the fruit grew. In group B, the resistance first increased but then decreased; this transition occurred when the fruit was disconnected from the stem, indicating that the fruit stopped growing and started shrinking after being cut.
Latex, a type of stretchable polymer, can provide excellent stretchability for wearable sensors in plant phenotyping. A stretchable latex substrate was coated with graphite ink and carbon nanotube ink to enhance the sensor's stretchability and gauge factor to 150% and 352, respectively. The resulting sensor was mounted on a Cucurbita pepo fruit for circumferential elongation monitoring. The high sensitivity and temporal resolution of the sensor enabled it to discover an interesting phenomenon: the growth of the Cucurbita pepo follows a rhythmic pattern. The diameter of the pepo increased by 12 μm in 70 s, alternating between growth periods of 10 s and stagnation periods of 10 s. This strain sensor demonstrated the capability of dynamically measuring elongation at the micrometer scale.
Leaf temperature
There are marked differences between plant leaf temperature and air temperature. Monitoring the leaf temperature and analyzing the temperature difference between the leaf and the air can help determine whether plants are under water stress. Unlike traditional infrared thermal imaging methods, wearable sensors are minimally affected by environmental factors. In leaf temperature measurement research, much effort has been focused on the data transmission of wearable sensors.
Wireless communication is widely used in agricultural applications due to its convenience and low cost. Daskalakis et al. proposed a tag-sensor node for leaf temperature measurement based on the wireless backscattering principle, which transmits data through an incident radio-frequency signal without requiring a battery or power source. The sensors were fabricated using low-cost inkjet printing with nanoparticle inks and silver epoxy. The study employed a "clothespin" scheme, placing 2 sensors on the top and back of a leaf, respectively, to measure air temperature and leaf temperature. The communication part of the sensor exploited backscatter Morse code modulation on an 868-MHz carrier emitter signal, with each Morse code symbol corresponding to an air-temperature value (for example, one symbol corresponded to 28 °C).
Hydration
Water content and water movement are crucial factors in plant growth. In addition to measuring leaf temperature to indirectly reflect whether plants are subjected to water stress, direct measurement of plant hydration is another option. Traditional phenotyping methods for monitoring plant water content include thermal imaging and terahertz imaging, which require laboratory settings. Wearable sensors offer a solution for in-field measurement of plant hydration, but the interface between the sensor and the plant must be robust to accurately acquire hydration information.
One strategy is to use a clamp. In 2012, Atherton et al. proposed a microfabricated thermal sensor device with a thin-film microheater for analyzing the moisture content of leaves by monitoring thermal resistance. The sensor was fixed to the leaf using a plastic clamp. Oren et al. also used the clamp strategy, proposing a multiplex graphene oxide (GO)-based relative humidity (RH) sensor to track water transport inside maize plants. The sensor was adhered to the bottom of a 1-mm-deep chamber in acrylic glass, which was fixed onto the leaf's surface using lightweight plastic clamping slabs and screws. The disadvantage of this clamp strategy is the relatively complicated installation process and the mechanical compressive force that may damage the clamped plant organs.
A more convenient and plant-friendly strategy is to use nontoxic adhesive tape, although this approach is only viable for wearable sensors with high flexibility; otherwise, they cannot intimately fit the plant epidermis. Many efforts have been devoted to fabricating flexible hydration sensors. One plant drought sensor based on a polyimide (PI) film was used to monitor the moisture status of tobacco plants. The sensor was formed by depositing Ti/Au electrodes onto a flexible PI film, which acted as both the sensing element and supporting substrate. The sensor was then peeled from the glass and transferred to a one-side-sticky polyethylene terephthalate film with high flexibility, which facilitated its installation on the plant; the sensor was attached to the lower surface of a Nicotiana tabacum leaf. The moisture released by transpiration of the leaf increased the capacitance of the PI film, so monitoring the capacitance could deduce the hydration status of the plant. During a measurement period in which watering occurred every 6 d, the capacitance rapidly increased after each watering.
Bioelectric potentials
Bioelectric potentials are vital for regulating life activities in plants and can change rapidly in response to external stimuli. The conventional method of measuring bioelectric potential involves inserting hard electrodes into tissues, which can cause damage to plants. The use of flexible electrode sensors as a minimally invasive phenotyping tool allows for direct attachment to the plant’s surface to measure bioelectric potentials, causing minimal damage to the plant and enabling continuous measurement.
Analogous to the measurement of plant hydration, accurately monitoring bioelectric potentials requires the flexible electrode to be tightly integrated with the leaf. However, different plants have varying epidermal structures, so the attachment between flexible electrode and plant differs from species to species. For plants with smooth skins such as Opuntia and Aloe, Ochiai et al. attached a boron-doped diamond (BDD) electrode sensor to a piece of green phloem tissue to monitor bioelectric potentials. Metal electrode (Pt and Ag) sensors were also characterized for comparison. The BDD sensor could detect obvious changes in bioelectric potentials when a finger touched the surface of Opuntia or when environmental factors such as temperature and humidity changed. The measurement could be continued for 7 d, indicating the long-term monitoring capability of the BDD sensor. Although the sensitivity of the BDD sensor was 5 to 10 times higher than that of the metal sensors, its signal stability was unsatisfactory.
Stress response
Plants are frequently exposed to biotic or abiotic stresses, such as pathogen infections, ultraviolet, and ground-level ozone, which can hinder plant growth and alter some physiological characteristics. It is crucial to measure the plant’s stress response at an early stage and take timely intervention. Traditional phenotyping methods for measuring stress response are based on visual identification, but these methods may not detect early-stage stress responses. Wearable sensors offer a potential solution to this problem, enabling real-time monitoring and prompt intervention.
Phytophthora infestans (P. infestans) causes plant late blight, a destructive disease that affects various plants, including tomato and potato. Infected plants usually emit volatile organic compound (VOC) gases, such as aldehydes, during the early stage. One study used a gas sensor array attached to leaves for early-stage identification of late blight caused by P. infestans. The sensor array consists of gold nanoparticles (AuNPs) decorated with reduced GO (rGO) and silver nanowires (AgNWs), acting as the sensing layer and electrode, respectively. The sensing layer can form reversible interactions with plant VOCs via hydrogen or halogen bonds, resulting in a resistance increase of the sensor. The sensor array was attached to a tomato leaf using double-sided tape. After 15 h of stable sensor response, the whole plant was sprayed with a suspension of infectious P. infestans sporangia. Small fluctuations in the signal were observed during the first 35 h, and a marked increase was observed at 100 h, indicating the emission of characteristic VOC gases induced by the propagation of P. infestans infection. Notably, 2 watering events at 25 and 35 h induced negligible signal interference. After 115 h, the signals gradually stabilized, indicating that the tomato leaf was completely infected by P. infestans; it is worth mentioning that at 115 h, typical symptoms of late blight, including water-soaked lesions and circular gray spots, started to become visible on the leaves. The results confirm the potential of the sensor array for identifying VOCs during the early stage of P. infestans infection.
Wearable Sensors for Plant Environment Monitoring
The environment is 1 of the 2 crucial factors determining plant phenotypes, making the monitoring of the environment an essential aspect of plant phenotyping. Optical methods, including machine vision, spectroscopy, and aerial vehicles, are conventional techniques for monitoring the environment around plants and provide large area coverage. However, these methods are limited in detecting the microenvironment that directly affects plant growth. In contrast, wearable sensors with a contact measurement mode can closely adhere to the surface of plants, sensing real-time changes in the microenvironment. This section reviews the progress of wearable sensors for monitoring the environment, including air temperature, air moisture, light, pesticides, and toxic gases. Notably, multimodal sensors are typically integrated to monitor these environmental factors simultaneously.
Air temperature
Air temperature can have a marked impact on photosynthesis, which is a vital process for producing energy and sugar for plant growth. Inadequate or excessive temperature levels can hinder the healthy development of plants.
A wearable device that integrates temperature and humidity sensors has been developed for deployment on plant surfaces. The flexible sensory platform was fabricated using traditional Si-based microfabrication technology. Au electrodes were sputtered onto an ultralight, butterfly-shaped PI substrate. Among these, the serpentine Au pattern acted as the temperature sensor, as the resistance of Au increases with temperature (0.032 Ω/°C). The sensory platform was placed on the surface of Scindapsus aureus leaves and connected to data acquisition and transmission circuits using ultralight electrical wires and silver epoxy. The flexible sensory platform monitored the real-time air temperature around the plant. To confirm the sensing performance of the temperature sensor, the data generated by the system were compared with data collected by a commercial sensor. As the temperature increased (as read by the commercial temperature sensor), the resistance of the developed temperature sensor increased synchronously. The results demonstrated the good reliability of the fabricated sensor.
Multifunctionality is a key advantage of plant wearable sensors. A lightweight and stretchable sensor has been reported that is capable of monitoring multiple plant phenotypes (elongation and hydration) and environmental factors (air temperature and light). The entire sensor weighs only 17 mg and has a large stretchability of 120%, facilitated by a self-similar serpentine design. These features minimize interference with the growth of the host leaf. The temperature sensing element utilizes a Cu layer with a meander pattern. The sensor was installed on a corn leaf outdoors to monitor real-time air temperature. The recorded temperature data were consistent with data obtained using a thermal imaging camera.
Air moisture
Air humidity is a crucial factor that affects stomatal opening and closing, thereby regulating the plant’s transpiration rate, which controls water absorption and mineral nutrition transport. The moisture in the air also has a direct impact on plant health. If the humidity is too low, plant leaves tend to wilt and detach to conserve water, impeding plant growth. Conversely, if the humidity is too high, plants are vulnerable to insect infestations and foliar and root diseases.
The ultralight butterfly-shaped flexible multisensory platform also includes a humidity sensor with an interdigital shape. In this case, PI serves as the humidity sensing element, and its capacitance increases with humidity, displaying a high sensitivity of 1.6/% RH. When installed on a plant leaf for real-time environmental monitoring, the data collected from the fabricated humidity sensor over 2 periods was consistent with that obtained from a commercial humidity sensor.
The multimodal flexible sensor system had 2 humidity sensors, both of which were fabricated by generating interdigital LIG electrodes on the PI substrate through laser scanning. The humidity sensing element for both sensors was ZnIn2S4 nanosheets deposited on the LIG electrodes. One sensor was exposed to the atmosphere to measure air humidity (room humidity), while the other was attached directly to the lower epidermis of a P. macrocarpa leaf to measure leaf humidity. During the plant’s growth, the air humidity was maintained at a constant level, and the light was periodically switched on and off. The data recorded by the air-facing humidity sensor confirmed the constant level of air humidity, while the data measured by the leaf-mounted humidity sensor indicated that leaf humidity rapidly increased when the light was on and stomata opened for photosynthesis. Conversely, the leaf humidity decreased when the light source was turned off.
Light
Light is one of the most important environmental factors for plants. On one hand, light is indispensable for photosynthesis; on the other hand, excessive light can cause physical damage to plants, such as leaf burning. Therefore, monitoring the light intensity in the environment is crucial.
In the previously mentioned stretchable multimodal sensor, a silicon-based phototransistor was used for light sensing. To improve flexibility and reduce weight, the phototransistor was mechanically polished to a thickness of 20 μm. During real-time monitoring of a corn leaf outdoors, the phototransistor detected the light attenuation during sunset, and the measurement result was consistent with that measured by a commercial illuminometer.
The multimodal flexible sensor system also featured an optical sensor, fabricated by screen-printing Ag electrodes onto the PI substrate and depositing ZnIn2S4 nanosheets onto the Ag electrodes as the light sensing element. The optical sensor exhibited a fast response time of approximately 4 ms and could detect light illumination at a frequency of 50 Hz. To simulate day and night, an artificial light source (18 W) was automatically switched on and off every 12 h, and the switching was accurately detected by the wearable sensor.
Pesticide
Pesticides are widely used in agriculture to protect plants from insect pests. However, they can also leave behind residues that can affect plant phenotypes. Current methods for detecting pesticide residues include mass spectrometry, high-performance liquid chromatography, and gas chromatography. However, these methods require expensive equipment and are not suitable for in-situ detection.
Wearable sensors have also been utilized to detect pesticide residues on plants. One such sensor can be directly attached to the plant surface for in-situ detection of organophosphorus pesticides. To fabricate the sensor, a serpentine 3-electrode LIG pattern was synthesized on a PI film and transferred to PDMS. The LIG electrodes on the PDMS substrate had good flexibility and stretchability and could adapt well to the irregular surface of plants. The LIG-based electrodes were then modified with organophosphorus hydrolase and AuNPs to enhance the electrochemical performance. The sensor was affixed to the surface of a spinach leaf for in-situ detection. When methyl parathion solution was sprayed onto the leaf surface, the sensor acquired real-time information on pesticide residues and displayed it on a smartphone. A clear peak of p-nitrophenol was observed in the presence of methyl parathion compared to the control experiment.
Toxic gas
Toxic gases in the environment, even in small amounts, can cause irreversible damage to plants. Current detection of these gases mainly relies on gas chromatography, which is a costly and time-consuming process. Furthermore, it can be challenging to collect gas samples in the field where airflow disturbance frequently occurs. Wearable sensors can provide a solution to these challenges by performing in-situ measurements of toxic gases.
A gas sensor array based on SWCNT channels and graphite electrodes was used to detect dimethyl methylphosphonate (DMMP), a simulant of the sarin nerve agent, which can interfere with the photosynthetic process of plants. The gas sensor array consisted of 9 field-effect sensors. The resistance of the SWCNT channels, which had openings around them, could be modulated by molecules adsorbed on the SWCNT surface donating or withdrawing electrons. Additionally, the gas sensor array exhibited good adhesion and could be easily transferred to planar and nonplanar surfaces. The array was transferred to the leaf surface of a lucky bamboo to sense DMMP gas. When DMMP gas was present, the sensor responded within 5 s, and the response intensity increased with the DMMP concentration.
Another toxic gas, nitrogen dioxide (NO2), can cause plant wilt and leaf yellowing. A sprayed gas sensor array was developed using metallic SWCNTs as the conductive electrode and AgNPs/rGO as the sensing element. The sensor was sprayed directly onto the leaves of living plants for in-situ detection. When the plant was exposed to NO2, the sensor’s resistance rapidly increased, and this response was reversible once the NO2 was replaced by dry air. As the concentration of NO2 increased, the response of the sensor also increased, with a limit of detection as low as 0.5 ppm. The sprayed sensor showed better detection performance than conventional metal electrode-based sensors, demonstrating its great potential for the in-situ detection of NO2 around plants.
Challenges and Perspectives
Wearable sensors hold great promise for plant phenotyping due to their high spatial resolution, multifunctionality, and minimal invasiveness. A few plant wearable sensors are already commercially available; for example, AgriHouse Inc. has released a plant wearable sensor named “Leaf Sensor” for the measurement of plant water level. However, several challenges remain in the transition from concept demonstration to large-scale application, including interference with plant growth, weak bonding interfaces, limited signal types, and small monitoring coverage. We summarize these challenges below and provide potential solutions:
1. Interfering with plant growth. While wearable sensors can be less invasive than some other sampling methods, they can still interfere with plant growth. For example, the weight of the sensor can create pressure on the plant, and the sensor may not grow synchronously with the host plant. Additionally, the sensor can cover stomata, hindering gas exchange, and may reduce light absorption due to its opaqueness. Therefore, to minimize interference, plant wearable sensors should be lightweight, soft, stretchable, breathable, and transparent, requirements that can be met through material selection and structural design.
2. Weak bonding interface. To achieve real-time measurements, the wearable sensor must remain continuously attached to the host plant, so a strong bonding interface is required between the sensor and the plant. However, the plant’s epidermis is typically irregular and uneven due to the presence of microstructures such as stomata, papillae, and villi, which provide limited bonding sites for sensors with smooth surfaces. Previous research has used clamps to fix wearable sensors, but the mechanical pressure can interfere with plant growth. A more advanced approach utilizes a morphable thermogel to compensate for the morphological mismatch between the plant and the sensor. Further solutions may be inspired by tough hydrogels.
3. Limited signal types. Currently, wearable sensors are electronic devices that convert plant phenotype and environmental information into electrical signals. As a result, only a limited range of signal types can be collected. For example, current wearable electronic sensors cannot yet measure nitrogen content, a critical phenotype indicator. To obtain more signal types, other devices, such as optical and acoustic devices, could be integrated into wearable sensors.
4. Small monitoring coverage. While wearable sensors have high spatial resolution, the information they acquire is local. Currently, only a limited number of wearable sensors are attached to a leaf or stem of a plant, which cannot monitor the overall phenotype and environmental information of the host plant, let alone the information of other plants in the same field. To expand the monitoring coverage, numerous wearable sensors are expected to be distributed over the target field to build a dense sensor network system. This requires wearable sensors to be produced at a large scale and low cost.
Conclusion
In this review, we have provided a comprehensive overview of the progress made in the development of wearable sensors for monitoring plant phenotypes (including elongation, leaf temperature, hydration, bioelectric potential, and stress response) and the environment (including air temperature, humidity, light, pesticides, and toxic gases). Compared to traditional phenotyping technologies based on optical imaging, wearable sensors have unique advantages, such as high spatial resolution, the ability to easily uncover the impact of environmental factors on phenotypes, and high accuracy in the field, which demonstrate their great potential in plant phenotyping. Although challenges exist, such as interference with plant growth, weak bonding interfaces, limited signal types, and small monitoring coverage, we have proposed possible solutions. With the continued progress and improvement of wearable sensors, they will markedly accelerate plant phenotyping.
Test React Native App with Jest and React Native Testing Library
Testing is an important part of any software development process. It helps you ensure that your code works as expected and that you are not introducing any bugs. In this article, we will focus on unit testing by providing a simple example of how to test a React Native component.
Setting up the project
Let's create a simple React Native app, and then we will add testing to it.
react-native init AwesomeProject
This will create a new app in a folder called AwesomeProject. Now we can run the following command to start our app: cd AwesomeProject && yarn start
Configuring the React Native Testing Library:
Install Required Packages: Ensure you have Jest and React Native Testing Library installed in your project. If not, you can install them using npm or yarn:
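If they are missing, a typical installation looks like the sketch below (exact package versions depend on your React Native version, so treat this as a starting point):
yarn add --dev jest @testing-library/react-native react-test-renderer
The React Native CLI template normally also configures Jest through a "preset": "react-native" entry in package.json; if your project lacks it, add it so Jest knows how to transform React Native code.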
If you’re using Babel in your project, you might need to add some configuration to your .babelrc or babel.config.js file to make sure Jest can handle importing images and other assets in your tests. Here’s an example of what you might need to add to your Babel configuration:
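A minimal sketch of such a configuration is shown below, assuming the standard React Native Babel preset; note that static assets (images, fonts) are more commonly mocked via the Jest configuration (for example, moduleNameMapper) than in Babel itself:
// babel.config.js — a minimal sketch
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
};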
With the setup done, you can now start writing tests for your React Native components. Here's a simple example of a snapshot test for a component, using react-test-renderer:
Steps:
1. In the root directory of the project, create a folder named __tests__.
2. In this folder, create your test suites, which are files containing the testing code.
// App.test.js
import 'react-native';
import React from 'react';
import App from './../app/App';
import renderer from 'react-test-renderer';
// snapshot test
test('renders correctly', () => {
  const snapshot = renderer.create(<App />).toJSON();
  expect(snapshot).toMatchSnapshot();
});
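If you prefer React Native Testing Library's query-based approach over a raw snapshot, an equivalent test might look like the sketch below. The 'Welcome' string is purely illustrative and assumes your App renders some visible text:
// App.rntl.test.js — a sketch using React Native Testing Library
import React from 'react';
import { render } from '@testing-library/react-native';
import App from './../app/App';

test('renders a welcome message', () => {
  const { getByText } = render(<App />);
  // 'Welcome' is a hypothetical string; use text your App actually renders
  expect(getByText('Welcome')).toBeTruthy();
});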
Running Test Cases:
yarn test -u (this creates new snapshots for the test cases or updates the old ones)
yarn test (this matches the current output against the previously saved snapshots and prints to the console whether each test passes or fails)
Jest Features:
The coverage report
The Jest coverage report provides detailed information on your test coverage. To show a coverage report in the console, simply pass the --coverage flag when running the tests. A table containing the coverage information is then printed to the console.
yarn test --coverage
yarn test --coverage --coverageDirectory='coverage' (also writes the report files, including a browsable HTML report, to the coverage directory)
% Stmts: Percentage of all statements that were executed at least once by the tests.
% Branch: Percentage of all branches whose conditions were taken at least once by the tests.
% Funcs: Percentage of all functions that were called at least once by the tests.
% Lines: Percentage of all source code lines that were executed at least once by the tests.
The watch plug-in
The watch plug-in gives you quick feedback on code changes. Jest can be started with the CLI option --watch to re-run only the tests affected by file changes. In watch mode:
'f' re-runs only failed tests;
'u' updates all failing snapshots; and
'i' launches an interactive mode to update snapshots individually.
Mocking
Mocking is a software development practice used in unit testing. It involves creating fake or mock versions of external dependencies (such as modules, objects, APIs, or databases) that your code under test relies on.
The main purposes of mocking are: Isolation (testing a unit without its real dependencies), Control (dictating exactly how a dependency behaves), Speed (avoiding slow network or database calls), and Independence (tests that do not rely on external services being available).
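As a quick illustration, here is a minimal sketch of mocking a module with jest.mock. The ../app/api module and its fetchUser function are hypothetical names, not part of the project above:
// __tests__/user.test.js — a sketch of module mocking
import { fetchUser } from '../app/api';

// jest.mock replaces the real module with an auto-mocked version,
// so fetchUser becomes a controllable jest.fn()
jest.mock('../app/api');

test('returns the mocked user', async () => {
  // Control: we decide exactly what the dependency returns
  fetchUser.mockResolvedValue({ id: 1, name: 'Ada' });
  const user = await fetchUser(1);
  expect(user.name).toBe('Ada');
});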
Snapshot Testing
Snapshot testing is a way to test React components by rendering the component, taking a “snapshot” of the rendered output, and comparing it with a previously approved snapshot. If the output matches the approved snapshot, the test passes; otherwise, it fails.
This is particularly useful when refactoring or making changes to existing components, as snapshot tests can catch regressions in the component’s output.
Types of Test Cases
Unit Tests: Focus on testing individual components or functions in isolation. This is crucial for ensuring that each part of your application works as expected under controlled conditions.
Component Testing (Snapshot tests, prop and state changes, lifecycle methods)
Logic and Utility Testing (Pure functions, utility functions, business logic)
Integration Tests: Test how different parts of your application work together. This could involve testing the integration between components or between components and external services.
Component Integration Testing (Testing interactions between parent and child components)
Redux and State Management Testing (Action creators, reducers, selectors, and integration with components)
Snapshot Tests: As mentioned earlier, snapshot tests allow you to compare the current output of your components against a previously saved snapshot. This is particularly useful for catching unexpected changes in the UI.
End-to-End (E2E) Tests: These tests simulate real user scenarios across the entire application. E2E testing is essential for ensuring that your application works seamlessly from start to finish.
Automate and deploy Android and iOS Builds using Fastlane and Self-Hosted Runners
Introduction
Why automate Android and iOS builds?
Automating Android and iOS builds focuses on the build and deployment steps in the Software Development Life Cycle (SDLC) to save time. By automating these processes, we reduce manual intervention, minimize errors, and ensure faster and more consistent delivery of application updates.
Continuous Integration and Continuous Deployment (CI/CD) pipelines are crucial in modern mobile app development. They ensure that code changes are automatically built, tested, and deployed, reducing manual effort and the risk of errors.
Introduction to CI/CD Pipeline
Continuous Integration (CI) and Continuous Delivery (CD) are practices that enable development teams to deliver code changes more frequently and reliably.
Continuous Integration (CI): Developers merge their code changes into a central repository. Automated builds and tests are run to ensure that the new code does not introduce any bugs or break existing functionality.
Continuous Delivery (CD): Once code passes CI, it is automatically deployed to a staging environment. From there, it can be released to production with a manual approval step. Key benefits of this approach include:
Faster Development Cycles: Automated processes reduce the time required for code integration and deployment.
Improved Code Quality: Continuous testing ensures that code changes do not introduce new bugs or regressions.
Enhanced Collaboration: Teams can collaborate more effectively with a streamlined workflow.
Reduced Manual Effort: Automation minimizes manual intervention, reducing human error and freeing up developer time for more critical tasks.
The primary goal is to ensure that code changes are integrated and delivered to production rapidly and safely.
Introduction to Self-Hosted Runners
Self-hosted runners are machines that you manage and maintain to run GitHub Actions workflows. Unlike GitHub-hosted runners, which are managed by GitHub, self-hosted runners provide more control over the hardware, operating system, and software environment.
Step-by-Step Guide
Create a Runner: a. Go to your repository on GitHub. b. Navigate to Settings > Actions > Runners > New self-hosted runner. c. Choose the operating system for your runner (Linux, macOS, or Windows).
Download and Configure the Runner: Follow the steps GitHub provides to set up the self-hosted runner.
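On the runner machine, the configuration commands GitHub shows look roughly like the sketch below; the repository URL and token are placeholders that GitHub generates for you:
# a sketch of configuring and starting a self-hosted runner
./config.sh --url https://github.com/OWNER/REPO --token <RUNNER_TOKEN>
./run.sh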
Introduction to Fastlane
Fastlane is an open-source platform designed to streamline the process of building, testing, and releasing mobile applications for iOS and Android. It automates many repetitive tasks in the app development lifecycle, making it easier and faster for developers to deploy their apps.
Setting Up Fastlane for Android and iOS
Installing Fastlane: Fastlane can be installed in multiple ways. The preferred method is with Bundler. Fastlane can also be installed directly through Homebrew (on macOS). It is possible to use macOS's system Ruby, but this is not recommended, as it makes dependencies hard to manage and can cause conflicts.
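Assuming the Bundler route, the setup is typically along these lines:
# a sketch of installing fastlane via Bundler
gem install bundler
bundle init                        # creates a Gemfile
echo 'gem "fastlane"' >> Gemfile
bundle install
bundle exec fastlane --version     # verify the installation
# alternatively, on macOS: brew install fastlane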
Setting up Fastlane: Navigate your terminal to your project and run fastlane init inside the android and ios directories. This will create a fastlane folder inside each of the project's android and ios directories.
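After running it, each platform directory typically ends up with a structure like:
android/fastlane/
  Appfile    # stores identifiers such as the package name
  Fastfile   # defines the lanes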
Deploy Android and iOS Builds to Firebase
Android fastlane code
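A minimal sketch of such a lane is shown below, assuming the fastlane-plugin-firebase_app_distribution plugin is installed; the Firebase app ID and tester group are placeholders:
# android/fastlane/Fastfile — a sketch, not a drop-in file
default_platform(:android)

platform :android do
  desc "Build a release APK and upload it to Firebase App Distribution"
  lane :firebase_distribute do
    gradle(task: "assembleRelease")
    firebase_app_distribution(
      app: "1:1234567890:android:abcdef",  # placeholder Firebase app ID
      groups: "internal-testers",          # placeholder tester group
      release_notes: "Automated build from CI"
    )
  end
end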
iOS fastlane code
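And an equivalent sketch for iOS, assuming a scheme named MyApp and the same Firebase plugin; the scheme, export method, and app ID are placeholders:
# ios/fastlane/Fastfile — a sketch, not a drop-in file
default_platform(:ios)

platform :ios do
  desc "Build the app and upload it to Firebase App Distribution"
  lane :firebase_distribute do
    build_app(scheme: "MyApp", export_method: "ad-hoc")  # placeholder scheme
    firebase_app_distribution(
      app: "1:1234567890:ios:abcdef",  # placeholder Firebase app ID
      groups: "internal-testers"
    )
  end
end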
Integrating Fastlane with GitHub Actions
Integrating Fastlane with CI/CD (Github Actions) pipelines is essential for automating the build, test, and deployment processes for mobile applications. This integration ensures that each code change is automatically built, tested, and deployed, improving efficiency and reducing the risk of human errors.
Explanation:
Trigger: The pipeline runs on push events to the main branch, on pull requests, or manually via workflow_dispatch.
Jobs: a. Build: Checks out the code, sets up the JDK, caches Gradle dependencies, builds the app, runs unit tests, and uploads the APK. b. Deploy: Deploys the APK or AAB to Firebase App Distribution, the Play Store, or the App Store (after the build job succeeds).
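A condensed sketch of such a workflow, targeting a self-hosted runner and the Android lane sketched above (job names, versions, and secrets are placeholders):
# .github/workflows/android.yml — a sketch, not a complete pipeline
name: Android CI/CD
on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:

jobs:
  build-and-deploy:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - name: Build and distribute via fastlane
        working-directory: android
        run: |
          bundle install
          bundle exec fastlane firebase_distribute
        env:
          FIREBASE_TOKEN: ${{ secrets.FIREBASE_TOKEN }}  # placeholder secret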